We currently expose five primary classes, which we make available for use directly in your own code and tests:
- BSTAlexa is our Alexa emulator. It allows one to write unit tests and functional tests that mimic the behavior of the Alexa service.
- LambdaServer makes it easy to run your Lambdas locally for unit and functional tests.
- BSTProxy allows our proxy tool to be used programmatically. An example is here.
- BSTEncode encodes audio files to Alexa standards and makes them available via S3.
- Logless makes logging and diagnostics for Alexa skills and Lambdas super-simple.
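All of these classes hang off the module's main export, which the examples below refer to as bst. As a minimal setup sketch (assuming the package is installed from npm as bespoken-tools; adjust the require if your install differs), a test file would start with:

// Assumption: the package is installed as bespoken-tools
const bst = require('bespoken-tools');
// Node's built-in assert module, used in the tests below
const assert = require('assert');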
Below is a simple example Mocha test.
This can be used to test any Alexa skill, not just one written in JavaScript:
it('Plays and Goes To Next', function (done) {
    alexa.spoken('Play Music', function (error, response) {
        // Confirms the correct directive is returned when the Intent is spoken
        assert.equal(response.response.directives[0].type, 'AudioPlayer.Play');

        // Ensures the track with correct token is returned
        assert.equal(response.response.directives[0].audioItem.stream.token, '1');

        alexa.intended('AMAZON.NextIntent', null, function (error, response) {
            // Ensures the track with next token is returned
            assert.equal(response.response.directives[0].audioItem.stream.token, '2');
            done();
        });
    });
});
We initialize the BSTAlexa in the beforeEach block, like so:
let alexa = null;

beforeEach(function (done) {
    alexa = new bst.BSTAlexa('http://localhost:10000',
                             './speechAssets/IntentSchema.json',
                             './speechAssets/Utterances.txt');
    alexa.start(function () {
        done();
    });
});
And we clean up in the afterEach block:
afterEach(function (done) {
    alexa.stop(function () {
        done();
    });
});
We can then utilize the LambdaServer to automatically start and stop a Node.js/Lambda-based skill within our tests.
To do this, simply start the LambdaServer on an open port and point it at your Lambda file:
let server = new bst.LambdaServer('./lib/index.js', 10000, true);
server.start();
The last parameter, true, enables verbose debugging. This prints out all the requests and responses from the skill to the console.
This will typically reside within our beforeEach block, similar to the BSTAlexa initialization:
beforeEach(function (done) {
    server = new bst.LambdaServer('./lib/index.js', 10000, true);
    alexa = new bst.BSTAlexa('http://localhost:10000',
                             './speechAssets/IntentSchema.json',
                             './speechAssets/Utterances.txt');
    server.start(function () {
        alexa.start(function () {
            done();
        });
    });
});
And we shut it down at the end like so:
afterEach(function (done) {
    alexa.stop(function () {
        server.stop(function () {
            done();
        });
    });
});
It is very important to shut down the server; otherwise it will keep listening on the specified port!
Listeners can be set on all the events listed here.
For example, to see that a track has begun playing, add a listener on the AudioPlayer.PlaybackStarted event.
This works well in concert with the audioItemFinished call, which acts as if the current track had finished playing on the device.
The next audio item queued from your skill should then be started.
Sample code:
alexa.on('AudioPlayer.PlaybackStarted', function (audioItem) {
    assert.equal(audioItem.stream.token, '2');
});

alexa.audioItemFinished();
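Putting the two together, here is a sketch of how the listener and audioItemFinished might be used inside a Mocha test. It assumes the same beforeEach setup and the same skill as the earlier example, with tracks whose tokens are '1' and '2':

it('Plays the next track when the current one finishes', function (done) {
    alexa.spoken('Play Music', function (error, response) {
        // The skill should start playing the first track
        assert.equal(response.response.directives[0].audioItem.stream.token, '1');

        // Fires once the emulator starts the next queued track
        alexa.on('AudioPlayer.PlaybackStarted', function (audioItem) {
            assert.equal(audioItem.stream.token, '2');
            done();
        });

        // Acts as if the current track finished playing on the device
        alexa.audioItemFinished();
    });
});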