Alexa Skills Kit Adapter

The Jargon Platform Alexa Skills Kit Adapter (@jargon/platform-alexa-skill-kit) integrates the Jargon Platform SDK with voice applications built on top of the Alexa Skills Kit SDK for Node.js framework.

Requirements

The Alexa Skills Kit Adapter works with Amazon Alexa skills that are built using the ASK SDK v2 for Node.js.

The minimum Node.js version is 8.10. If you're deploying your voice application on AWS Lambda, all current Node.js runtimes are supported.

Installation

Using your preferred package manager (npm or yarn), add the Jargon Platform Alexa Skills Kit Adapter as a dependency using the following package name:

@jargon/platform-alexa-skill-kit

Initialization

const Jargon = require('@jargon/platform-alexa-skill-kit')

const skillBuilder = new Jargon.JargonSkillBuilder().installOnto(Alexa.SkillBuilders.custom())

It's best to construct your skill builder when your application starts up, not within a request or handler function. While creating the skill builder per-request will work, it prevents the Jargon Platform SDK from caching and re-using data across requests, and thus increases latency for an application instance that handles more than a single request.
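
Once the adapter is installed, handlers access the Jargon response builder via handlerInput.jrb. The sketch below shows the typical handler shape; the resource keys ('welcome', 'welcomeReprompt') are hypothetical, and the ri function is a stand-in for the SDK's RenderItem factory so the sketch is self-contained:

```javascript
// Stand-in for the SDK's RenderItem factory so this sketch runs on its own;
// in a real skill you'd use the factory exported by the SDK package.
const ri = (key, params) => ({ key, params })

const LaunchRequestHandler = {
  canHandle (handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'LaunchRequest'
  },
  handle (handlerInput) {
    // handlerInput.jrb is the JargonResponseBuilder the adapter installs;
    // speak and reprompt take RenderItems instead of raw strings.
    return handlerInput.jrb
      .speak(ri('welcome', { name: 'World' }))
      .reprompt(ri('welcomeReprompt'))
      .getResponse()
  }
}
```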

Runtime interface

JargonResponseBuilder

The core class you'll work with. JargonResponseBuilder mirrors the ASK SDK response builder, but replaces the string parameters that carry user-facing content with RenderItems. JargonResponseBuilder also has a method to add a structured Jargon response; using structured responses lets your voice application focus on its business logic.

By default the speak and reprompt methods replace the content from previous calls to those methods; this behavior mirrors that of the corresponding ASK SDK methods. There are two ways to change this behavior so that multiple calls result in content being merged (with a space in between) instead of replaced:

  1. When constructing the JargonSkillBuilder (described below), pass in an options object with mergeSpeakAndReprompt set to true
  2. Provide a ResponseGenerationOptions object to the speak or reprompt method with merge set to true

When mergeSpeakAndReprompt is true, the default replace behavior can still be used for specific calls to speak or reprompt by providing a ResponseGenerationOptions object with merge set to false.

Note that each individual call to speak or reprompt should contain content that can stand alone (e.g., a full sentence or paragraph) to minimize the chances that the order of the content would change across languages.
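
The merge/replace behavior described above can be summarized as follows. This is an illustrative sketch of the semantics, not the adapter's actual implementation:

```javascript
// Models how successive speak() (or reprompt()) calls combine.
// mergeDefault corresponds to the skill builder's mergeSpeakAndReprompt option;
// perCallMerge corresponds to ResponseGenerationOptions.merge, which (when
// provided) overrides the default for that call.
function nextSpeech (previous, rendered, mergeDefault, perCallMerge) {
  const merge = perCallMerge !== undefined ? perCallMerge : mergeDefault
  return merge && previous ? previous + ' ' + rendered : rendered
}

nextSpeech(undefined, 'Hello.', true)           // first call: nothing to merge with
nextSpeech('Hello.', 'How are you?', true)      // merged with a space in between
nextSpeech('Hello.', 'Goodbye.', true, false)   // per-call merge: false forces replace
```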

/**
 * Options that modify the default response builder behavior for a specific call.
 * These flags may be Jargon-specific (such as merge) or come from the ASK response
 * builder (playBehavior).
 */
export interface ResponseGenerationOptions {
  /**
   * If provided, overrides the mergeSpeakAndReprompt setting in the response builder's options.
   * True merges the rendered content with previously rendered content; false replaces any previous content
   */
  merge?: boolean
  /** Optional playBehavior parameter for speak and reprompt (for ASK v2.4.0 and greater) */
  playBehavior?: ui.PlayBehavior
}

/**
 * JargonResponseBuilder mirrors the ASK response builder, but takes RenderItems instead
 * of raw strings for parameters that end up being spoken or displayed to the end user.
 */
export interface JargonResponseBuilder {
  /**
   * Add a Jargon response to the output. This response may include multiple components (speak, reprompt, etc.)
   * @param {RenderItem} response The item to render for the response content
   * @param {ResponseGenerationOptions} options Options to control response generation behavior
   * @returns {ResponseBuilder}
   */
  withJargonResponse (response: RenderItem, options?: ResponseGenerationOptions): this

  /**
   * Has Alexa say the provided speech to the user
   * @param {RenderItem} speechOutput The item to render for the speech content
   * @param {ResponseGenerationOptions} options Options to control response generation behavior
   * @returns {ResponseBuilder}
   */

  speak (speechOutput: RenderItem, options?: ResponseGenerationOptions): this
  /**
   * Has Alexa listen for speech from the user. If the user doesn't respond within 8 seconds,
   * Alexa reprompts with the provided reprompt speech
   * @param {RenderItem} repromptSpeechOutput The item to render for the reprompt content
   * @param {ResponseGenerationOptions} options Options to control response generation behavior
   * @returns {ResponseBuilder}
   */

  reprompt (repromptSpeechOutput: RenderItem, options?: ResponseGenerationOptions): this
  /**
   * Renders a simple card with the following title and content
   * @param {RenderItem} cardTitle
   * @param {RenderItem} cardContent
   * @returns {JargonResponseBuilder}
   */

  withSimpleCard (cardTitle: RenderItem, cardContent: RenderItem): this
  /**
   * Renders a standard card with the following title, content and image
   * @param {RenderItem} cardTitle
   * @param {RenderItem} cardContent
   * @param {string} smallImageUrl
   * @param {string} largeImageUrl
   * @returns {JargonResponseBuilder}
   */

  withStandardCard (cardTitle: RenderItem, cardContent: RenderItem, smallImageUrl?: string, largeImageUrl?: string): this

  /**
   * Renders a link account card
   * @returns {JargonResponseBuilder}
   */
  withLinkAccountCard (): this

  /**
   * Renders an askForPermissionsConsent card
   * @param {string[]} permissionArray
   * @returns {JargonResponseBuilder}
   */
  withAskForPermissionsConsentCard (permissionArray: string[]): this

  /**
   * Adds a Dialog delegate directive to response
   * @param {Intent} updatedIntent
   * @returns {JargonResponseBuilder}
   */
  addDelegateDirective (updatedIntent?: Intent): this

  /**
   * Adds a Dialog elicitSlot directive to response
   * @param {string} slotToElicit
   * @param {Intent} updatedIntent
   * @returns {JargonResponseBuilder}
   */
  addElicitSlotDirective (slotToElicit: string, updatedIntent?: Intent): this

  /**
   * Adds a Dialog confirmSlot directive to response
   * @param {string} slotToConfirm
   * @param {Intent} updatedIntent
   * @returns {JargonResponseBuilder}
   */
  addConfirmSlotDirective (slotToConfirm: string, updatedIntent?: Intent): this

  /**
   * Adds a Dialog confirmIntent directive to response
   * @param {Intent} updatedIntent
   * @returns {JargonResponseBuilder}
   */
  addConfirmIntentDirective (updatedIntent?: Intent): this

  /**
   * Adds an AudioPlayer play directive
   * @param {interfaces.audioplayer.PlayBehavior} playBehavior Describes playback behavior. Accepted values:
   * REPLACE_ALL: Immediately begin playback of the specified stream, and replace current and enqueued streams.
   * ENQUEUE: Add the specified stream to the end of the current queue.
   * This does not impact the currently playing stream.
   * REPLACE_ENQUEUED: Replace all streams in the queue. This does not impact the currently playing stream.
   * @param {string} url Identifies the location of audio content at a remote HTTPS location.
   * The audio file must be hosted at an Internet-accessible HTTPS endpoint.
   * HTTPS is required, and the domain hosting the files must present a valid, trusted SSL certificate.
   * Self-signed certificates cannot be used.
   * The supported formats for the audio file include AAC/MP4, MP3, HLS, PLS and M3U. Bitrates: 16kbps to 384 kbps.
   * @param {string} token A token that represents the audio stream. This token cannot exceed 1024 characters
   * @param {number} offsetInMilliseconds The timestamp in the stream from which Alexa should begin playback.
   * Set to 0 to start playing the stream from the beginning.
   * Set to any other value to start playback from that associated point in the stream
   * @param {string} expectedPreviousToken A token that represents the expected previous stream.
   * This property is required and allowed only when the playBehavior is ENQUEUE.
   * This is used to prevent potential race conditions if requests to progress
   * through a playlist and change tracks occur at the same time.
   * @param {interfaces.audioplayer.AudioItemMetadata} audioItemMetadata Metadata that can be displayed on screen enabled devices
   * @returns {JargonResponseBuilder}
   */
  addAudioPlayerPlayDirective (playBehavior: interfaces.audioplayer.PlayBehavior, url: string, token: string, offsetInMilliseconds: number, expectedPreviousToken?: string, audioItemMetadata?: AudioItemMetadata): this

  /**
   * Adds an AudioPlayer Stop directive - Stops the current audio Playback
   * @returns {JargonResponseBuilder}
   */
  addAudioPlayerStopDirective (): this

  /**
   * Adds an AudioPlayer ClearQueue directive - clear the queue without stopping the currently playing stream,
   * or clear the queue and stop any currently playing stream.
   *
   * @param {interfaces.audioplayer.ClearBehavior} clearBehavior Describes the clear queue behavior.
   * Accepted values:
   * CLEAR_ENQUEUED: clears the queue and continues to play the currently playing stream
   * CLEAR_ALL: clears the entire playback queue and stops the currently playing stream (if applicable).
   * @returns {JargonResponseBuilder}
   */

  addAudioPlayerClearQueueDirective (clearBehavior: interfaces.audioplayer.ClearBehavior): this
  /**
   * Adds a Display RenderTemplate Directive
   * @param {interfaces.display.Template} template
   * @returns {JargonResponseBuilder}
   */

  addRenderTemplateDirective (template: interfaces.display.Template): this
  /**
   * Adds a hint directive - show a hint on the screen of the echo show
   * @param {RenderItem} text The item to render as the hint's plain text
   * @returns {JargonResponseBuilder}
   */
  addHintDirective (text: RenderItem): this

  /**
   * Adds a VideoApp play directive to play a video
   *
   * @param {string} source Identifies the location of video content at a remote HTTPS location.
   * The video file must be hosted at an Internet-accessible HTTPS endpoint.
   * @param {RenderItem} title (optional) title that can be displayed on VideoApp.
   * @param {RenderItem} subtitle (optional) subtitle that can be displayed on VideoApp.
   * @returns {JargonResponseBuilder}
   */
  addVideoAppLaunchDirective (source: string, title?: RenderItem, subtitle?: RenderItem): this

  /**
   * Adds canFulfillIntent to response.
   * @param {canfulfill.CanFulfillIntent} canFulfillIntent
   * @returns {JargonResponseBuilder}
   */
  withCanFulfillIntent (canFulfillIntent: canfulfill.CanFulfillIntent): this

  /**
   * Sets shouldEndSession value to null/false/true
   * @param {boolean} val
   * @returns {JargonResponseBuilder}
   */
  withShouldEndSession (val: boolean): this

  /**
   * Helper method for adding directives to responses
   * @param {Directive} directive The directive to send back to the Alexa device
   * @returns {JargonResponseBuilder}
   */
  addDirective (directive: Directive): this

  /**
   * Returns a promise to the response object
   * @returns {Promise<Response>}
   */
  getResponse (): Promise<Response>
}

JargonSkillBuilder

JargonSkillBuilder installs onto the ASK skill builder, and handles all details of initializing the Jargon SDK, installing request and response interceptors, and so on. Its constructor optionally takes an options object, for example to set mergeSpeakAndReprompt as described above.

const skillBuilder = new Jargon.JargonSkillBuilder().installOnto(Alexa.SkillBuilders.custom())

ResourceManager

Internally JargonResponseBuilder uses a ResourceManager to render strings and objects. You can directly access the resource manager if desired, for use cases such as:

  • obtaining locale-specific values that are used as parameters for later rendering operations
  • incrementally or conditionally constructing complex content
  • response directives that internally have locale-specific content (such as an upsell directive)
  • batch rendering of multiple resources
  • determining which variation the ResourceManager chose

You can access the resource manager through any of the following methods:

  • handlerInput.jrm
  • handlerInput.jargonResourceManager
  • handlerInput.attributesManager.getRequestAttributes().jrm
  • handlerInput.attributesManager.getRequestAttributes().jargonResourceManager
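
For example, a handler might render a locale-specific string up front and use it as a parameter in a later rendering operation. The sketch below assumes the resource manager exposes a render method that returns a Promise for the rendered string; the resource keys ('cardTitle', 'cardBody') are hypothetical, and the ri function is a stand-in for the SDK's RenderItem factory so the sketch is self-contained:

```javascript
// Stand-in for the SDK's RenderItem factory.
const ri = (key, params) => ({ key, params })

async function renderCardParts (handlerInput) {
  const jrm = handlerInput.jrm
  // Render the title first, then feed the rendered value into a second render call.
  const title = await jrm.render(ri('cardTitle'))
  const body = await jrm.render(ri('cardBody', { title }))
  return { title, body }
}
```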