Build a Google Assistant with Interactive Canvas and React
Part 3
Welcome to the final part of this tutorial. Here, I’ll be covering how to pass params from your bot to your ReactJS app and how to add onClicks to your app!
By the end of this, I hope that you’ll be able to deploy and interact with your New York Times Bestseller bot properly — from handling clicks to tracking session params.
If you’ve not yet seen the first two parts, give them a look and try them out, because this third part builds on what was already done!
Part 1:
Part 2:
Assuming you’ve followed along, maybe even created your own pages and design, you’ll likely end up with this flow:
As you can tell, we still have lorem ipsums and placeholder data everywhere. So let’s start with that first.
Sending and Tracking Session Variables as Parameters to the ReactJS App
As I won’t be covering how to connect APIs to your bot, we will be creating some mock data for our usage.
Setting Up Mock Data
I’ll be adding the dummy data inside `sdk > webhooks > ActionsOnGoogleFulfillment > index.js`.
Regarding the book cover images, there are many ways to store them, and it’s up to you to explore whichever way you’d like. Starting from the easiest:
- Just add the images to your `public` folder and call them from there → note that this would mean you call the images in your `src` folder, not in your `sdk` folder
- Upload them to a free image-hosting site like ImgBB and use the HTML URL generated
- Set up Firebase Storage with this project, store your photos in there and call them (but note the restrictions, like 10GB of storage until it starts to cost you)
I’ll be using ImgBB, but you can simply replace the URLs with whichever method you’d like.
Above is a snippet of what my data will look like; the full version is in my repository.
I have a short summary and a long summary: the bot reads out the short summary, while the long summary is displayed on the screen.
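As an illustration, the mock data could be shaped along these lines. This is only a hypothetical sketch: all titles, URLs, and field names (such as `shortSummary` and `longSummary`) are made up, and the real list lives in the repository.

```javascript
// Hypothetical mock data for the fulfillment webhook (all values are made up).
const BOOKS = {
  fiction: [
    {
      title: 'Example Fiction Title',
      author: 'Jane Doe',
      image: 'https://i.ibb.co/example/fiction-cover.jpg', // e.g. an ImgBB URL
      shortSummary: 'A one-liner the bot reads aloud.',
      longSummary:
        'A longer description that is only displayed on the screen, never spoken.',
    },
  ],
  nonFiction: [
    {
      title: 'Example Non-Fiction Title',
      author: 'John Smith',
      image: 'https://i.ibb.co/example/nonfiction-cover.jpg',
      shortSummary: 'Another one-liner for the bot to read.',
      longSummary: 'Another longer description for the screen only.',
    },
  ],
};

console.log(Object.keys(BOOKS)); // [ 'fiction', 'nonFiction' ]
```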
Once you have your data set up, you can add logic so that when the conversation enters an intent, the bot knows what to do with it and what to show on the screen.
Sending Params to the React App
When the user says “Look at Fiction List” from the `Welcome` scene, we want to pick only the Fiction list and pass it to the front-end.
Note 3 things that are done here:

1. `conv.session.params.book_categories_options` is used to let us know which category has been selected by the user. This is linked to the slot-filling that was done in the `Welcome` scene.
2. `handleTypeOverride` uses `conv.session.typeOverrides` to override the `books_options` Types we initially listed on our Actions on Google console. Here, we need to list the entries inside the type with their `name` and `synonyms`. This way we don’t have to manually update the Types whenever we have new data.
3. `new Canvas` has `params` passed through `data`. The `params` is what the React app will take to update the screens.
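To make those three steps concrete, here is a minimal runnable sketch that stubs the `conv` request object instead of pulling in the real `@assistant/conversation` library. The mock data, handler name, and `data` payload shape are all assumptions, not the exact code from the repository:

```javascript
// Stubbed sketch of the Welcome-scene webhook handler (not the real SDK).
const MOCK_BOOKS = {
  fiction: [{ title: 'Book One', shortSummary: 'A short blurb.' }],
  nonFiction: [{ title: 'Book Two', shortSummary: 'Another blurb.' }],
};

function handleBookList(conv) {
  // 1. Read the slot-filled category from the session params.
  const category = conv.session.params.book_categories_options;
  const books = MOCK_BOOKS[category] || [];

  // 2. Override the books_options type so it always matches the current data.
  conv.session.typeOverrides = [
    {
      name: 'books_options',
      mode: 'TYPE_REPLACE',
      synonym: {
        entries: books.map((b) => ({ name: b.title, synonyms: [b.title] })),
      },
    },
  ];

  // 3. Send the chosen list to the React app; in the real webhook this would
  //    be `conv.add(new Canvas({ data: { page: 'BOOK_LIST', books } }))`.
  conv.canvasData = { page: 'BOOK_LIST', books };
}

const conv = { session: { params: { book_categories_options: 'fiction' } } };
handleBookList(conv);
console.log(conv.canvasData.books[0].title); // "Book One"
```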
For the `book_details` webhook, we don’t need to override the types, but we’ll need to know which book was chosen by the user and pass the details to the React app, again through `params`.
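That lookup can be sketched the same way, again with a stubbed `conv` object; the `books_options` param name and the data shape are assumptions:

```javascript
// Stubbed sketch: find the book the user chose and forward its details.
const ALL_BOOKS = [
  { title: 'Book One', longSummary: 'Shown on screen.' },
  { title: 'Book Two', longSummary: 'Also shown on screen.' },
];

function handleBookDetails(conv) {
  const chosenTitle = conv.session.params.books_options; // slot-filled value
  const book = ALL_BOOKS.find((b) => b.title === chosenTitle);
  // Stand-in for `conv.add(new Canvas({ data: { page: 'BOOK_DETAILS', book } }))`
  conv.canvasData = { page: 'BOOK_DETAILS', book };
}

const conv = { session: { params: { books_options: 'Book Two' } } };
handleBookDetails(conv);
console.log(conv.canvasData.book.longSummary); // "Also shown on screen."
```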
Now that you have the webhook settled, let’s see how we can use it inside our ReactJS app.
Using the Params from the BE in our ReactJS App
If you recall from Part 2, our `Canvas.js` has this:
Notice the `dataEntry.params` that is called here. We will be using this to set `this.state.params` and pass the `params` into whichever components we will need.
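The receiving side can be sketched like this. The real `interactiveCanvas` global only exists inside the Assistant webview, so it is stubbed here so the snippet runs standalone, and the payload shape is an assumption:

```javascript
// Sketch of how Canvas.js picks up params from the webhook's Canvas data.
const state = { params: null };

const callbacks = {
  onUpdate(data) {
    const dataEntry = data[data.length - 1]; // latest Canvas payload
    if (dataEntry && dataEntry.params) {
      // In the React component this would be:
      // this.setState({ params: dataEntry.params })
      state.params = dataEntry.params;
    }
  },
};

// Stub of the webview-provided global, for illustration only:
const interactiveCanvas = {
  ready(cb) { this.cb = cb; },
  push(data) { this.cb.onUpdate(data); },
};

interactiveCanvas.ready(callbacks);
interactiveCanvas.push([{ params: { page: 'BOOK_LIST', books: [] } }]);
console.log(state.params.page); // "BOOK_LIST"
```

In the actual `Canvas.js`, the `onUpdate` callback sets React state instead of mutating a plain object, and the props then flow down to whichever components need them.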
And that’s it! Simply repeat the same thing for your other components/pages (if you need params in them) and you’ve now successfully passed data from the bot to your ReactJS App.
The final thing that I’ll be covering (and it’s optional) is adding onClicks in your application so that the user can tap the screen to trigger an intent.
Adding onClicks in your ReactJS App
Let’s take the example again of the user tapping either the Fiction or the Non-Fiction button in the `Welcome` scene.
All we have to do is:
- Add an onClick on the Button itself
- Make use of `interactiveCanvas.sendTextQuery(query)`
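A minimal sketch of that click handler, with the webview-provided `interactiveCanvas` global stubbed so the snippet runs standalone (the query string format is an assumption based on the “Look at Fiction List” phrase from earlier):

```javascript
// Stub of the webview global, for illustration only:
const window = {
  interactiveCanvas: {
    sendTextQuery(query) { this.lastQuery = query; },
  },
};

function handleCategoryClick(category) {
  // Sends text to the Assistant as if the user had said it aloud,
  // which triggers the matching intent.
  window.interactiveCanvas.sendTextQuery(`Look at ${category} List`);
}

handleCategoryClick('Fiction');
console.log(window.interactiveCanvas.lastQuery); // "Look at Fiction List"
```

In the React component, you would then wire this up as something like `<button onClick={() => handleCategoryClick('Fiction')}>Fiction</button>`.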
And that’s it!
Wrap-Up
If you’ve followed through the tutorial, here’s how it should look!
And if you require my repository, here it is:
I’ve broken up the parts into their own branches so that you can look at what you need.
There’s of course the part where you can deploy your bot for testers or even for public use, but I believe that should be easy enough to navigate!
Let me know what you think in the comments!
I’m sure that as Google improves its products, many things will change, and hey, you might even find better ways to write things! But hopefully this will be your first step into Interactive Canvas and exploring more of what it can do!
All the best :)
More content at plainenglish.io