I made the following short video to demonstrate my initial prototyping work on the Lighting Path Alexa Skill:
I haven’t done too much with the Alexa skill yet, beyond starting to chart out exactly how it is going to be structured.
One interesting aspect of it is that when interacting with an Alexa, latency is an issue. In other words, people don’t like waiting (!) for a response.
So the initial work I did had to do with reducing latency by pre-generating the required code, and hosting it all on Amazon’s server infrastructure.
This is a type of meta-programming (i.e. writing software to generate other software) which I have always been interested in.
In this case, what I did was create an initial meta engine that interacts with Michael’s main LP server to get some quotes, and then generates quickly executing code for deployment to Amazon’s cloud infrastructure. That generated code doesn’t need to interact with the main LP server at all on each Alexa request.
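To make the idea concrete, here is a minimal sketch of what a meta engine like this could look like in Python. The endpoint URL, the quote format, and the handler shape are all assumptions for illustration, not the actual LP server API: the point is just that quotes are fetched once at generation time and baked directly into a self-contained handler file, so the deployed code never calls the LP server per request.

```python
import json
import urllib.request

# Hypothetical endpoint on the main LP server; the real URL and
# payload format would differ.
LP_QUOTES_URL = "https://example.com/lp/api/quotes"

# Template for the generated handler. The quotes are inlined as a
# literal list, so the deployed code is fully self-contained.
HANDLER_TEMPLATE = '''\
import random

# Quotes baked in at generation time by the meta engine.
QUOTES = {quotes}

def handler(event, context):
    """Return a minimal Alexa-style response with a pre-generated quote."""
    return {{
        "version": "1.0",
        "response": {{
            "outputSpeech": {{
                "type": "PlainText",
                "text": random.choice(QUOTES),
            }}
        }},
    }}
'''

def fetch_quotes():
    """One-time fetch from the LP server (generation time only)."""
    with urllib.request.urlopen(LP_QUOTES_URL) as resp:
        return json.load(resp)

def generate_handler(quotes, out_path="lp_handler.py"):
    """Write a standalone handler file with the quotes inlined."""
    with open(out_path, "w") as f:
        f.write(HANDLER_TEMPLATE.format(quotes=json.dumps(quotes)))
```

The generated `lp_handler.py` can then be zipped up and deployed to Amazon’s cloud as-is, with no runtime dependency on the LP server.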
Doing it this way obviously reduces load on the main LP server, and also lets the Alexa respond instantly to your LP requests.
(Note: my meta engine runs periodically, figures out if new content is available on the LP server, and if so, regenerates what is needed for deployment to the Amazon cloud.)
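The “figures out if new content is available” step can be sketched as a simple fingerprint comparison. This is an illustrative approach, not necessarily how my engine does it: hash the current quote list, compare it to the fingerprint saved from the last run, and only regenerate and redeploy when they differ.

```python
import hashlib
import json

def content_fingerprint(quotes):
    """Stable hash of the quote list, used to detect content changes."""
    payload = json.dumps(sorted(quotes)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def needs_regeneration(current_quotes, last_fingerprint):
    """True when the LP server's content changed since the last run."""
    return content_fingerprint(current_quotes) != last_fingerprint
```

On each periodic run, the engine would fetch the quotes, call `needs_regeneration`, and skip the whole generate-and-deploy step when nothing has changed.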
Often I don’t really know why I am necessarily doing what I am doing. I just sort of go with the flow, and approach what I am working on in an intuitive way.
But in retrospect, I can see now that architecting it this way was right on.
For sure it was very smart to approach it this way from the beginning, because it will make a big difference once the user base of the Lighting Path starts to grow.
We can use a similar architectural approach when we port this functionality to the Google Assistant, and within a few years, we will have lots of our users engaging with Lighting Path content via their smart speakers and in-dash consoles in their cars.
It is amazing how quickly the world is changing.