Class StartMedicalStreamTranscriptionRequest

    • Method Detail

      • languageCode

        public final LanguageCode languageCode()

        Specify the language code that represents the language spoken in your audio.

        Amazon Transcribe Medical only supports US English (en-US).

        If the service returns an enum value that is not available in the current SDK version, languageCode will return LanguageCode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from languageCodeAsString().

        Returns:
        The language code that represents the language spoken in your audio.

        Amazon Transcribe Medical only supports US English (en-US).

        See Also:
        LanguageCode
      • languageCodeAsString

        public final String languageCodeAsString()

        Specify the language code that represents the language spoken in your audio.

        Amazon Transcribe Medical only supports US English (en-US).

        If the service returns an enum value that is not available in the current SDK version, languageCode will return LanguageCode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from languageCodeAsString().

        Returns:
        The language code that represents the language spoken in your audio.

        Amazon Transcribe Medical only supports US English (en-US).

        See Also:
        LanguageCode
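        The UNKNOWN_TO_SDK_VERSION behavior described above can be illustrated with a simplified, self-contained sketch. Note that SketchLanguageCode below is a hypothetical stand-in written for this example; the real LanguageCode enum lives in the AWS SDK and has more members. Only the fallback pattern itself is shown:

        ```java
        // Simplified stand-in for an SDK enum such as LanguageCode (illustrative only).
        enum SketchLanguageCode {
            EN_US("en-US"),
            UNKNOWN_TO_SDK_VERSION(null);

            private final String value;

            SketchLanguageCode(String value) { this.value = value; }

            // Mirrors the SDK convention: an unrecognized raw value maps to
            // UNKNOWN_TO_SDK_VERSION instead of throwing, so values added by the
            // service after this SDK version was released don't break clients.
            static SketchLanguageCode fromValue(String raw) {
                for (SketchLanguageCode c : values()) {
                    if (c.value != null && c.value.equals(raw)) {
                        return c;
                    }
                }
                return UNKNOWN_TO_SDK_VERSION;
            }

            String toValue() { return value; }
        }

        public class EnumSketch {
            public static void main(String[] args) {
                // A known value resolves to its enum constant.
                System.out.println(SketchLanguageCode.fromValue("en-US"));     // EN_US
                // A value this "SDK version" doesn't know falls back safely; in the
                // real SDK the raw string stays available via languageCodeAsString().
                System.out.println(SketchLanguageCode.fromValue("fr-FR"));     // UNKNOWN_TO_SDK_VERSION
            }
        }
        ```

        This is why the AsString() accessor pair exists: the typed accessor is convenient when the value is known to your SDK version, while the String accessor never loses information.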
      • mediaSampleRateHertz

        public final Integer mediaSampleRateHertz()

        The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

        Returns:
        The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
      • mediaEncoding

        public final MediaEncoding mediaEncoding()

        Specify the encoding used for the input audio. Supported formats are:

        • FLAC

        • OPUS-encoded audio in an Ogg container

        • PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

        For more information, see Media formats.

        If the service returns an enum value that is not available in the current SDK version, mediaEncoding will return MediaEncoding.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from mediaEncodingAsString().

        Returns:
        The encoding used for the input audio. Supported formats are:

        • FLAC

        • OPUS-encoded audio in an Ogg container

        • PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

        For more information, see Media formats.

        See Also:
        MediaEncoding
      • mediaEncodingAsString

        public final String mediaEncodingAsString()

        Specify the encoding used for the input audio. Supported formats are:

        • FLAC

        • OPUS-encoded audio in an Ogg container

        • PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

        For more information, see Media formats.

        If the service returns an enum value that is not available in the current SDK version, mediaEncoding will return MediaEncoding.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from mediaEncodingAsString().

        Returns:
        The encoding used for the input audio. Supported formats are:

        • FLAC

        • OPUS-encoded audio in an Ogg container

        • PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

        For more information, see Media formats.

        See Also:
        MediaEncoding
      • vocabularyName

        public final String vocabularyName()

        Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

        Returns:
        The name of the custom vocabulary you want to use when processing your transcription. Note that vocabulary names are case sensitive.
      • specialty

        public final Specialty specialty()

        Specify the medical specialty contained in your audio.

        If the service returns an enum value that is not available in the current SDK version, specialty will return Specialty.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from specialtyAsString().

        Returns:
        The medical specialty contained in your audio.
        See Also:
        Specialty
      • specialtyAsString

        public final String specialtyAsString()

        Specify the medical specialty contained in your audio.

        If the service returns an enum value that is not available in the current SDK version, specialty will return Specialty.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from specialtyAsString().

        Returns:
        The medical specialty contained in your audio.
        See Also:
        Specialty
      • type

        public final Type type()

        Specify the type of input audio. For example, choose DICTATION for a provider dictating patient notes and CONVERSATION for a dialogue between a patient and a medical professional.

        If the service returns an enum value that is not available in the current SDK version, type will return Type.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from typeAsString().

        Returns:
        The type of input audio: DICTATION for a provider dictating patient notes, or CONVERSATION for a dialogue between a patient and a medical professional.
        See Also:
        Type
      • typeAsString

        public final String typeAsString()

        Specify the type of input audio. For example, choose DICTATION for a provider dictating patient notes and CONVERSATION for a dialogue between a patient and a medical professional.

        If the service returns an enum value that is not available in the current SDK version, type will return Type.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from typeAsString().

        Returns:
        The type of input audio: DICTATION for a provider dictating patient notes, or CONVERSATION for a dialogue between a patient and a medical professional.
        See Also:
        Type
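        Taken together, the accessors above correspond to setters on the request builder. The following is a minimal sketch of constructing a request, assuming the AWS SDK for Java 2.x transcribestreaming module is on the classpath; the parameter values shown are illustrative choices, not required defaults:

        ```java
        import software.amazon.awssdk.services.transcribestreaming.model.LanguageCode;
        import software.amazon.awssdk.services.transcribestreaming.model.MediaEncoding;
        import software.amazon.awssdk.services.transcribestreaming.model.Specialty;
        import software.amazon.awssdk.services.transcribestreaming.model.StartMedicalStreamTranscriptionRequest;
        import software.amazon.awssdk.services.transcribestreaming.model.Type;

        // Illustrative sketch: each builder setter matches a getter documented above.
        StartMedicalStreamTranscriptionRequest request =
                StartMedicalStreamTranscriptionRequest.builder()
                        .languageCode(LanguageCode.EN_US)   // only en-US is supported
                        .mediaSampleRateHertz(16_000)       // must match your audio (16,000-48,000 Hz)
                        .mediaEncoding(MediaEncoding.PCM)   // signed 16-bit little-endian PCM
                        .specialty(Specialty.PRIMARYCARE)   // medical specialty in the audio
                        .type(Type.DICTATION)               // DICTATION or CONVERSATION
                        .build();
        ```

        Passing the enum constant and passing its raw string are equivalent here; the typed overload simply guards against typos at compile time.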
      • showSpeakerLabel

        public final Boolean showSpeakerLabel()

        Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

        For more information, see Partitioning speakers (diarization).

        Returns:
        Whether speaker partitioning (diarization) is enabled in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

        For more information, see Partitioning speakers (diarization).

      • sessionId

        public final String sessionId()

        Specify a name for your transcription session. If you don't include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.

        Returns:
        The name of your transcription session. If you don't include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.
      • enableChannelIdentification

        public final Boolean enableChannelIdentification()

        Enables channel identification in multi-channel audio.

        Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

        If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.

        If you include EnableChannelIdentification in your request, you must also include NumberOfChannels.

        For more information, see Transcribing multi-channel audio.

        Returns:
        Whether channel identification is enabled for multi-channel audio.

        Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

        If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.

        If you include EnableChannelIdentification in your request, you must also include NumberOfChannels.

        For more information, see Transcribing multi-channel audio.

      • numberOfChannels

        public final Integer numberOfChannels()

        Specify the number of channels in your audio stream. This value must be 2, as only two channels are supported. If your audio doesn't contain multiple channels, do not include this parameter in your request.

        If you include NumberOfChannels in your request, you must also include EnableChannelIdentification.

        Returns:
        The number of channels in your audio stream. This value must be 2, as only two channels are supported. If your audio doesn't contain multiple channels, do not include this parameter in your request.

        If you include NumberOfChannels in your request, you must also include EnableChannelIdentification.
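        The pairing constraint above (EnableChannelIdentification requires NumberOfChannels, and vice versa) looks like this on the builder. This is a hedged sketch using the same SDK request class; values are illustrative:

        ```java
        import software.amazon.awssdk.services.transcribestreaming.model.LanguageCode;
        import software.amazon.awssdk.services.transcribestreaming.model.MediaEncoding;
        import software.amazon.awssdk.services.transcribestreaming.model.Specialty;
        import software.amazon.awssdk.services.transcribestreaming.model.StartMedicalStreamTranscriptionRequest;
        import software.amazon.awssdk.services.transcribestreaming.model.Type;

        // Multi-channel sketch: both channel settings must be supplied together.
        StartMedicalStreamTranscriptionRequest multiChannelRequest =
                StartMedicalStreamTranscriptionRequest.builder()
                        .languageCode(LanguageCode.EN_US)
                        .mediaSampleRateHertz(16_000)
                        .mediaEncoding(MediaEncoding.PCM)
                        .specialty(Specialty.PRIMARYCARE)
                        .type(Type.CONVERSATION)
                        .enableChannelIdentification(true)  // transcribe each channel independently
                        .numberOfChannels(2)                // must be 2; only two channels are supported
                        .build();
        ```

        Omitting either of the last two setters while including the other would produce an invalid request, per the constraint documented above.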

      • toString

        public final String toString()
        Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
        Overrides:
        toString in class Object