Inputs for ChatBedrockConverse.

Optional additionalModelRequestFields
Additional inference parameters that the model supports, beyond the base set of inference parameters that the Converse API supports in the inferenceConfig field. For more information, see the model parameters link below.
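A minimal sketch of passing a model-specific field through additionalModelRequestFields; top_k is an Anthropic-specific parameter used here only for illustration, and valid fields depend on the model you target:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

// top_k is not part of the base Converse inferenceConfig, so it is
// passed through additionalModelRequestFields instead.
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  additionalModelRequestFields: { top_k: 250 },
});
```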
Optional credentials
AWS credentials. If no credentials are provided, the default credentials from @aws-sdk/credential-provider-node will be used.
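A sketch of supplying static credentials explicitly; the environment variable names are placeholders, and any object with accessKeyId and secretAccessKey (or a credential provider) should work:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

// If this credentials object is omitted, the default provider chain
// from @aws-sdk/credential-provider-node resolves credentials instead.
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});
```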
Optional endpointHost
Override the default endpoint hostname (see the sketch after the region entry below).
Optional guardrailConfig
Configuration information for a guardrail that you want to use in the request.
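A sketch assuming the Converse API's GuardrailConfiguration shape; the identifier and version below are placeholders for a guardrail created in your own AWS account:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

// Placeholder identifier and version: substitute the values of a
// guardrail that exists in your account and region.
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  guardrailConfig: {
    guardrailIdentifier: "your-guardrail-id",
    guardrailVersion: "1",
  },
});
```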
Optional maxTokens
The maximum number of tokens to generate in the response.
Optional model
Model to use, e.g. "anthropic.claude-3-haiku-20240307-v1:0". This is equivalent to the modelId property in the list-foundation-models API. See the link below for a full list of models:
https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html#model-ids-arns
Default: anthropic.claude-3-haiku-20240307-v1:0
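A minimal instantiation sketch using the default model id shown above; maxTokens is set only for illustration:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  maxTokens: 512,
});

const response = await model.invoke("What is the capital of France?");
console.log(response.content);
```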
Optional region
The AWS region, e.g. us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if it is not provided here.
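A sketch combining region with the endpointHost override described earlier; the hostname shown is the standard public Bedrock runtime endpoint and stands in for whatever custom endpoint (e.g. a VPC endpoint) you need:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

// endpointHost is only needed when routing to a non-default endpoint;
// omit it to use the standard endpoint for the configured region.
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  region: "us-west-2",
  endpointHost: "bedrock-runtime.us-west-2.amazonaws.com",
});
```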
Optional streamUsage
Whether or not to include usage data, like token counts, in the streamed response chunks. Passing this as a call option takes precedence over the class-level setting.
Default: true
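A sketch of the class-level setting being overridden per call, assuming streamUsage is accepted as a call option as the description above states:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  streamUsage: false, // class-level: omit usage data from streamed chunks
});

// The call option takes precedence, so this stream includes usage data.
const stream = await model.stream("Tell me a joke.", { streamUsage: true });
```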
Optional streaming
Whether or not to stream responses.
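A basic streaming sketch; chunk.content may be a string or structured content depending on the model, so it is coerced for printing here:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  streaming: true,
});

// Each chunk is an AIMessageChunk; print the content as it arrives.
for await (const chunk of await model.stream("Write a haiku about rivers.")) {
  process.stdout.write(String(chunk.content));
}
```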
Optional temperature
Sampling temperature; higher values make output more random, lower values more deterministic.
Optional topP
The percentage of most-likely candidates that the model considers for the next token. For example, if you choose a value of 0.8 for topP, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence. The default value depends on the model you are using. For more information, see the inference parameters for foundation models link below.
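A sketch setting the sampling-related options together; the values are illustrative, not recommendations:

```ts
import { ChatBedrockConverse } from "@langchain/aws";

// Lower temperature plus a tighter topP biases the model toward
// more deterministic, higher-probability completions.
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  temperature: 0.2,
  topP: 0.8,
});
```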