Build Better Customer Applications with Multiexperience Development

The promise of multiexperience applications is phenomenal. With multiexperience you can align and connect the optimal user experience for each user touchpoint through fit-for-purpose applications that make every interaction across the user journey effortless. Attend this session to see why Mendix is a leader in multiexperience development platforms. Simon Black, Team Lead Evangelist, and David Brault, Product Marketing Manager, demonstrate applications that utilize several modalities across a customer journey, including:

  • chatbots
  • augmented reality
  • voice assistants
  • progressive web applications
  • native mobile
  • TV applications
Transcript

    (upbeat music)

    <v ->Hello, and welcome to this session</v>

    on building multiexperience applications.

    My name is Simon Black, team lead for the Evangelists.

    Today I’m broadcasting from Ely, Cambridgeshire

    in the UK.

    And today I’m joined by Dave Brault,

    who will talk to us

    about what a multiexperience development platform is.

    Hi Dave, and can you introduce yourself?

    <v ->Thanks, Simon.</v>

    My name is David Brault,

    product marketing manager here at Mendix.

    A little bit about myself.

    I relocated to Austin, Texas about six years ago

    where I’m doing this broadcast live

    and I’m fully assimilated now

    to the point where I spend way too much time

    on the hunt for the best barbecue in Texas.

    Anyway, let’s get back to why you’re here,

    building better customer applications

    with multiexperience development.

    During this session, we’re going to take a look

    at how MXDP is changing the development landscape

    and how Mendix can help.

    Then we’ll demonstrate

    several different kinds of experiences

    and let Simon pull back the covers

    so we can see how they were built.

    Let’s start off with a quote.

    “Good design is actually a lot harder to notice

    than poor design,

    in part because good designs fit our needs so well

    that the design is invisible.”

    And I absolutely love this quote

    because it’s so on point with the topic at hand.

    The primary purpose of MXDPs, or

    multiexperience development platforms,

    is to enable companies to create applications

    that deliver sophisticated user experiences

    across many devices and modalities

    like the ones you see on the screen right now.

    Now, the promise of multiexperience is phenomenal.

    Connect the optimal user experience

    to each customer touchpoint

    with fit for purpose applications

    that make every user interaction efficient and effortless,

    or invisible, as Don Norman would say.

    Now, the Mendix platform delivers on this promise

    with a truly integrated solution

    that uses a single skillset

    to build rich and engaging applications

    and experiences for any situation.

    Let’s take a deep dive

    into how the platform supports

    the rapid development of multiple experience solutions.

    At the foundation layer, the platform services

    and cloud-native architecture of Mendix

    do all the heavy lifting.

    It handles the complexity of dealing

    with loosely coupled applications

    and services running on a services-based architecture.

    It also handles all the core services

    like logging, security, and backup.

    Mendix applications and services are completely portable,

    which means they can be moved or distributed

    across cloud providers at will.

    Now at the next level,

    any service or data source

    can be consumed and published inside Studio Pro.

    So that’s REST, SOAP, OData, SQL, JSON, XML,

    even proprietary sources, all with no coding.

    They can be packaged up

    as reusable connectors inside of Data Hub

    or used in app services published in the Mendix marketplace.

    App services combine UI building blocks,

    widgets, logic, connectors, and services

    into packaged business capabilities,

    which can be used for building experiences

    higher up in the development chain.

    Now with Data Hub, these same reusable connectors

    can be exposed as virtualized data entities

    in a searchable catalog, which is great

    because now any developer

    has the ability to access rich metadata

    and, equally important, with built-in governance

    and security access.

    Now at the peak of this pyramid,

    developers stand atop a mountain of technology,

    abstraction, and reuse that allows them to focus

    on designing compelling user experiences.

    Which means development

    is no longer constrained by technology.

    Okay, so now that you’ve seen the architecture,

    let’s see what multiexperience applications look like

    and how they’re built.

    Now for the rest of this session,

    we’re going to follow a customer

    through a journey of buying a car,

    from ordering it, to getting it delivered

    with a small hiccup along the way.

    Let’s start with researching and buying the car,

    which involves a combination of progressive web apps,

    chatbots, augmented reality,

    and leveraging native mobile applications

    and their device features all built with Mendix.

    The customer journey begins here

    with this progressive web app.

    It’s responsive so it runs on any form factor

    and it’s fast because most of the application

    runs locally on the device.

    Now instead of calling or emailing the dealership,

    the next experience is an inline chatbot

    to schedule a test drive for 3:00 p.m.

    Using a combination of both typing

    and voice-to-text capabilities.

    Now, after the test drive,

    the customer uses augmented reality to configure their car

    by overlaying different paint and wheel colors

    because dealerships rarely stock

    every single color combination.

    Deciding to buy the car, the customer harnesses

    the power of the phone’s location services

    to populate their address

    and uses a credit card scanner

    to populate their credit card details, all without typing.

    And last in between commercial breaks,

    the customer uses a native TV application

    to check on the status of their car

    as it moves through the various stages

    of the manufacturing process.

    So at this point in time,

    I’m going to let Simon share with us

    how he used Mendix to build some of these experiences.


    <v ->Thanks, Dave.</v>

    In my sections, I’ll be taking a deeper dive

    into how those experiences

    were built using the Mendix platform.

    In this section, we’ll cover

    how we built the chatbot experience using AWS Lex,

    how we built out the AR experience

    using our React Native platform,

    and finally how we built out the experience

    for our Fire TV Stick app.

    So let’s take a look at how those experiences are built.

    First of all, inside this progressive web application,

    we can purchase a number of vehicles,

    but also ask certain questions

    using this chatbot feature here.

    This particular chatbot is using AWS Lex

    as its chatbot engine.

    And we can configure it to use a number of dialogues

    and understand what our customer is asking it.

    We can also add certain context data

    from our Mendix application.

    The way we train those bots is using the bot trainer

    inside the AWS Lex interface here.

    And all bots work in a similar manner.

    You build them using intents, slots and entities.

    An intent is something that you want the bot to do.

    And here we have a number of bots that we’ve created,

    including this schedule-test-drive intent,

    the one we showed in that video earlier.

    So here we can see we have an intent to make an appointment,

    and with this, we have to give it a number of utterances.

    Essentially, an utterance is a sentence,

    an example sentence that we wanna train this bot on.

    It will recognize those patterns

    and trigger an action based on this particular intent.

    We can also pick out certain key data

    from that particular utterance.

    So things like the booking type, the time

    and also the car that we wanna book for.

    So all chatbots work in a very similar manner

    and we’ll show you more

    as we go through these demonstrations.

    So, first we can see here we have the booking type,

    the car model, date/time,

    and these are stored in slot values.

    These are the things that we want to keep and store

    inside our application.

    Because the chatbot is very dumb.

    It doesn’t actually store any data,

    it simply acts on certain information

    and sends that back to the requester.
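The intent/utterance/slot pattern described here is common to most bot engines. As a rough, stand-alone sketch (this is not the AWS Lex API; the intent name, pattern, and slot names are invented for illustration), matching an utterance and pulling out slot values might look like:

```python
import re

# Hypothetical intent definition, loosely modelled on the
# schedule-test-drive example: a pattern standing in for the
# trained utterances, with named groups as the slots to capture.
INTENTS = {
    "MakeAppointment": re.compile(
        r"book a (?P<booking_type>test drive).*?"
        r"at (?P<time>\d{1,2}(?::\d{2})?\s?[ap]\.?m\.?)",
        re.IGNORECASE,
    ),
}

def interpret(utterance):
    """Return (intent_name, slots) for the first matching intent."""
    for name, pattern in INTENTS.items():
        match = pattern.search(utterance)
        if match:
            return name, match.groupdict()
    return None, {}
```

A real engine is trained on many example utterances and generalizes beyond exact patterns; the Mendix side only sees the resolved intent and slot values that come back in the response.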

    So let’s go ahead and have a look

    at how that is built inside the Mendix model.

    So here we have the Mendix model for our application

    and we first have our progressive web app

    where we can see the different details.

    And we also have a microflow

    that is being used to send the data

    to that particular Lex service.

    Now, this is using the AWS Lex connector

    available in the Mendix app store.

    And inside that particular connector,

    you can set up the keys and identities

    as well as the utterance

    that we’re gonna send to this particular chatbot.

    And as I said, that utterance is like a message.

    So it will interpret that message

    and come back with a response.

    And inside that response will be a number of slots.

    Those are how we store the actual values,

    the information like the time, the date,

    and also the car type.

    And we’re storing those inside the Mendix application

    so that we can remember that conversation

    and we can also create that booking.

    Inside the UI of this application for this chatbot,

    we’ve just simply used a list view.

    And inside that list view,

    we can showcase all of those messages we’ve sent

    and also received from that chatbot.

    So very quickly, that’s an overview

    as to how we’ve built that integration.

    Let’s take a look at our next integration,

    which was to build an AR experience.

    So to do so, we actually used a library

    that’s available for React Native.

    This is called ViroReact.

    And ViroReact allows you to create VR and AR experiences

    leveraging the AR capabilities of the device,

    whether that be ARCore or ARKit.

    And by using this,

    we can actually start to build those visualizations.

    And the way we did this is inside the modeler,

    we built out some custom widgets.

    These widgets allow us to set up

    certain markers to track.

    So here we have a tracking widget.

    And inside this particular tracking widget,

    we can set a particular image we wanna use to identify

    and place the object in a 3D space.

    So here we can see, we can select the image.

    And in this case, we’re gonna use a Mendix logo.

    This has some unique characteristics,

    so that we can easily identify it in the 3D space.

    We can also set some of the properties such as actions

    to be triggered when we detect certain items

    in the 3D space.

    Inside this pluggable widget,

    we then have a number of additional widgets

    to show the objects, and in this case the object is a car

    and a number of spheres

    and the spheres are the icons we saw

    at the top of that particular car to change the color.

    If we drill down into the object,

    we can select the material that is being used,

    we can choose the interaction

    and also the events that are used

    when we actually interact with this particular item.

    So when the tracker detects that particular marker,

    it will take this particular object

    and place it in the 3D space.

    We can then interact with it, walk around it,

    and we can get more information from it as well.

    So it’s a really powerful way of being able to preview

    and look at certain goods like a car

    or another product such as the light bulb,

    being able to interact with it

    without actually purchasing it beforehand.

    So let’s move on to our next experience.

    The last one that we showed was a TV application

    running on a Fire TV Stick.

    And actually this particular interaction

    and this particular device is very easy to integrate into.

    And this is because the application is built using Android.

    So all applications that are deployed onto a Fire TV Stick

    run on the Android platform.

    And because the Mendix Make It Native application

    deploys onto Android,

    we can simply install it onto the Fire TV Stick.

    And to do so, we just need to use this guide here.

    This guide uses ADB, the Android Debug Bridge,

    which allows you to connect to a device

    on your local network and install certain applications.

    So all we did was make our Fire TV Stick

    available on our network

    and using a few commands,

    we could install it onto that particular device.

    Now, the way we built that particular application

    isn’t anything fancy.

    All we needed to do is build out a native application

    in a separate app here.

    And here we have the carousel that we saw earlier.

    So the user was able to see pictures

    as to what stage of production they were in,

    and they can swipe through those using their buttons

    on their particular Fire TV Stick.

    There was one thing that we did need to change though.

    The Fire TV Stick runs on a TV,

    and a TV is a landscape view.

    We needed to make sure that

    instead of opening up the application in portrait,

    it opened up in landscape.

    Doing so is very easy.

    Inside the application, we have an option

    to configure the screen orientation.

    This is the code behind the native app

    that we built for this particular Fire TV Stick app.

    And this is based on the base template

    available from the platform.

    And all we’ve configured is this option here

    to switch it from portrait to landscape.

    This is to ensure that when we open up the application,

    it doesn’t first open up in portrait

    and then flip it to landscape.

    It makes sure that we open it up

    and it goes in landscape first.
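For reference, in a plain Android project the same behaviour comes from a single manifest attribute. This fragment is a generic Android illustration (the activity name is hypothetical), not the exact Mendix template file:

```xml
<!-- AndroidManifest.xml: lock the activity to landscape so the
     app opens directly in landscape, with no portrait flip -->
<activity
    android:name=".MainActivity"
    android:screenOrientation="landscape" />
```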

    So by changing a few configurations like this,

    it gives us another device profile,

    another experience for the user

    that we might not have even considered beforehand.

    So in the last few minutes,

    we’ve seen a demonstration of how we can use AWS Lex

    to communicate with a chatbot.

    We’ve then seen an AR experience on our native mobile,

    and the final experience was a TV application

    using the native platform from Mendix,

    deployed onto a Fire TV Stick.

    So, I’m gonna hand back over to Dave now

    who’s gonna take you through the next phase

    of our customer journey.

    <v ->Alright, Simon, thanks.</v>

    Very impressive.

    Let’s move on to the next phase

    where the customer finds out

    that there’s an issue with their order,

    and now they need to speak to an agent to get it resolved.

    So in this next experience,

    the customer receives an automated text message

    and a push notification

    that a manufacturing delay has occurred

    and it needs their immediate attention.

    So they decide to call customer service

    and they’re actually greeted by a virtual agent.

    Let’s listen in.

    <v Virtual Agent>Hello there,</v>

    welcome to Tesla customer care.

    How can I help you today?

    <v ->I’d like to know the status of my order.</v>

    <v Virtual Agent>No problem.</v>

    Can you tell me your order number

    so I can check that for you?

    <v ->1086.</v>

    <v Virtual Agent>Your order 1086</v>

    is currently in the chassis manufacturing stage

    and will be moving onto body manufacturing soon.

    Would you like me to put you through

    to a member of our team who can help you further?

    <v ->Yes, please.</v>

    <v Virtual Agent>Not a problem at all.</v>

    I’m connecting you now to a member of our team

    who has been briefed on your details, please hold.

    (upbeat music)

    (dialing tone beeps)

    <v ->Hi, Mr Black.</v>

    This is Tesla customer care.

    My name is Allister, how can I help you today?

    <v ->So the virtual agent</v>

    successfully gathered all the information

    required to route the caller to the appropriate person

    and prepare that employee for the call.

    Now, from there, the agent was able to resolve the issue.

    So at this point, I’ll let Simon take control again

    so he can show you how to build

    a virtual agent application with Mendix.

    Simon, back to you.

    <v ->Thanks, Dave.</v>

    What we saw there is a customer interacting with a bot

    using voice recognition.

    This particular bot was trained

    using the Twilio Autopilot service.

    Inside Twilio, we can train and build a number of tasks.

    These tasks are like intents,

    which we saw in our AWS Lex interface.

    From here, we can train it on a number of samples

    and these samples are like utterances,

    the same as we had inside our AWS Lex interface.

    Sample words and sentences that we want to trigger.

    Inside each of these,

    we also have the ability to program what happens

    when these particular key words and sentences are triggered.

    And then in this case, we’re doing a redirect to a URL.

    And this URL is a service hosted on a Mendix application.

    So all we’ve done is we’ve published a REST API

    from the Mendix application,

    which will get called and executed

    when these particular sentences are issued.

    So let’s switch into the model now

    and see how that experience is built out.

    Inside the model of this application,

    we can see here, we have this REST API

    that has been published.

    Inside that particular API call, we have a microflow.

    And this microflow is executed

    every time we get that API call.

    In this particular microflow, we have a number of steps,

    which pick up the current task and its information

    and then finally make a number of decisions

    around where to direct the customer

    based on the input it is getting.

    Now we could have created multiple different API endpoints

    depending on the different type of interaction,

    but we wanted one central microflow

    so we could show you

    the complexity of logic that’s going on

    behind the scenes in the Mendix application.

    So in this case,

    it’s detecting whether a redirect is needed or not.

    And if a redirect is needed, what it will do

    is it will then send a custom response back to Twilio

    to redirect them to a certain number.

    So in our scenario, we were redirected to Allister

    in the customer services team,

    who was able to then help us and start to fix the issue.

    And to do that, we actually submitted back some XML.

    This XML structure defines what phone number

    Twilio needs to dial to actually talk to the customer.

    And we can do all sorts of things inside this XML.

    This is a very common structure.

    AWS uses a similar structure where you can embed it

    with more content rich information,

    things like phone numbers, pictures, audios, and so on.

    For the other messages, we just simply use plain text

    to interact with those.
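As a concrete sketch of that redirect response: Twilio’s voice responses use a small XML vocabulary (TwiML) in which a `<Dial>` verb tells Twilio which number to connect. The helper and phone number below are illustrative, not part of the Mendix connector:

```python
def build_redirect_xml(phone_number):
    """Build a TwiML-style response instructing Twilio to dial
    the agent's number and connect the caller."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        "<Response>"
        f"<Dial>{phone_number}</Dial>"
        "</Response>"
    )

def build_text_reply(message):
    """Plain-text reply used for the other, non-redirect messages."""
    return message
```

The published REST endpoint would return the XML body for redirects and the plain string otherwise.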

    So we’ve seen in the last few minutes,

    an overview as to how we dealt with those conversations.

    We used Autopilot from Twilio

    to be able to handle those conversations

    and recognize those key utterances,

    and then passed those back to Mendix to get the key information

    such as the status of the order and other information.

    So let’s hand back to Dave now

    for our final part of the customer journey.

    <v ->Thanks, Simon.</v>

    During the last part of the buyer’s journey,

    we’ll take a look at a couple of different experiences used

    while the car is out for delivery.

    Let’s pick up with the customer asking Alexa

    for a status update.

    <v ->So Alexa, ask Connected Car for the status of my order.</v>

    <v Alexa>If you let me know your order number,</v>

    I can look up the status for you.

    <v ->1086.</v>

    <v Alexa>Your car has been built</v>

    and is out for delivery.

    It will be with you by 3:13 p.m.

    <v ->Now when the driver arrives at the customer’s location,</v>

    they use a native mobile application

    to walk through a checklist to release the car.

    Native apps are perfect

    for when workers need to interact

    with customers face to face.

    They can capture photographic proof

    of a successful delivery,

    or unfortunately catalog any damages

    so that a problem can be resolved as quickly as possible.

    The native app eliminates paper based processes

    by digitally capturing all this information,

    including the customer’s signature

    once they’re satisfied with the delivery.

    Okay, for one last time I’ll pass control to Simon.

    He’s going to show you how he used Mendix

    to build the Alexa app and the native mobile application.

    So Simon, back over to you.

    <v ->Thanks, Dave.</v>

    In this next section, we’ll take a look

    at how we built that integration into our Alexa device.

    To build an integration into Alexa,

    you first need to build a skill.

    A skill is like an app on the app store,

    but it’s personalized for Alexa.

    It uses voice rather than touch for interaction.

    Here, we have a skill that we’ve created

    for our connected car journey.

    And if we drill down on it, we can start to configure it

    to meet our particular needs.

    Inside here, we have, first of all, an invocation word.

    This is the key word or skill name

    that you want to give to this particular skill.

    And this will get triggered

    when you ask Alexa to do something.

    Next, we have the interaction model.

    And again, you can see some very similar principles here

    to what we were doing with AWS Lex.

    We can use certain intents, train them on certain utterances

    and pick up certain slots.

    So here we can see the particular dialogue

    for our status for our order.

    We can give it some utterances,

    some slot data that we want to capture,

    and that can then get triggered

    inside our Mendix application.

    So really, as long as you know

    how to build one type of chatbot or chat interface,

    you can very easily switch to other types of platforms.

    There are some subtle differences,

    but you can see there’s a lot of similarities

    across them.

    Inside the Alexa information here,

    we can also set an end point.

    And the end point is where we’re actually

    going to get that data from.

    So when we trigger a certain intent,

    we then want to be able to process it

    using this Mendix application.

    So let’s go inside that model

    and take a look at how that is implemented.

    So we open up the same model as our Twilio example.

    We can then start to see the information

    from our Alexa device.

    So here we actually register certain handlers

    for those intents.

    So in the after startup flow,

    when we start up our application,

    we can then trigger and set certain information and intents

    to get triggered

    when we actually see this particular intent.

    So in this case, when we see the intent status,

    it will trigger this particular microflow.
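The after-startup registration described here is essentially a lookup from intent names to handlers. A minimal sketch of that pattern (the intent name matches the demo; the handler bodies and helper names are ours, not the Mendix connector’s API):

```python
# Registry mapping intent names to handler functions, analogous to
# registering a microflow for each intent in the after-startup flow.
HANDLERS = {}

def register(intent_name):
    """Decorator that records a handler for a given intent."""
    def decorator(fn):
        HANDLERS[intent_name] = fn
        return fn
    return decorator

@register("OrderStatus")
def handle_order_status(slots):
    # The slot carries the data we want to capture: the order number.
    return f"Looking up order {slots.get('order_number')}"

def dispatch(intent_name, slots):
    """Route an incoming intent to its registered handler."""
    handler = HANDLERS.get(intent_name)
    return handler(slots) if handler else "Sorry, I can't help with that."
```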

    Now the one we showed in the example

    was for a status for our order.

    And inside this particular microflow,

    we can see we can get the information about the request

    and we can also get information from the slots,

    so the actual data that we wanna capture.

    And in this case it’s the actual order number

    that’s important.

    We wanna be able to capture whose order it is,

    look it up in the Mendix application

    and respond to that particular chatbot and to Alexa

    with the information that we need.

    So here we can see, we have a check

    to see whether the order number was found or not.

    And if it is found,

    we will respond with a certain message to it.

    So here we can see we have conditional options

    based on the status of our order.

    So if the order is delayed,

    then we will send a certain message to them.

    If it’s in finalization or manufacturing,

    we’ll send a different message.

    So you can really customize those experiences

    and those messages you provide back to your users.
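That branching on order status amounts to a lookup with a default. The statuses below follow the demo (delayed, manufacturing, finalization, delivery); the exact wording is illustrative:

```python
# Status-specific responses, with a fallback for unknown orders.
STATUS_MESSAGES = {
    "delayed": "Your order is delayed; we'll be in touch shortly.",
    "manufacturing": "Your order is currently in manufacturing.",
    "finalization": "Your order is in finalization and will ship soon.",
    "delivery": "Your car has been built and is out for delivery.",
}

def status_message(order_status):
    """Pick the response for an order's status, with a safe default."""
    return STATUS_MESSAGES.get(
        order_status, "Sorry, I couldn't find that order."
    )
```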

    And again, like we had for Twilio,

    you can respond with plain text or with SSML.

    And SSML is an XML-format structure

    which allows you to embed audio, images,

    and additional information for your Alexa device,

    because some Alexa devices have screens.

    So if you look at the Echo Show,

    you can actually show information

    and also play audio at the same time as well.
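SSML itself is a compact XML vocabulary. A response that speaks text and embeds an audio clip might look like the sketch below (the helper and URL are illustrative; `<speak>` and `<audio>` are standard SSML elements):

```python
def build_ssml(text, audio_url=None):
    """Wrap spoken text in SSML, optionally embedding an audio clip
    for devices that support richer responses."""
    audio = f'<audio src="{audio_url}"/>' if audio_url else ""
    return f"<speak>{text}{audio}</speak>"
```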

    So it’s a really easy to use connector.

    You can simply download this connector

    into your application,

    and this is actually using the AWS SDK for Java

    and uses those services and APIs

    to be able to communicate with them.

    So let’s take a look at our next experience,

    which is how did we build out the native application

    for our field service delivery drivers?

    And again, this was a different application

    and different module.

    And what we tended to do for all of these experiences

    is break them out

    into the smallest applications possible

    and share data across them.

    A really key point about MXDP

    is that your users should be able

    to move seamlessly through different applications,

    but also different experiences.

    So inside this model, we’re actually using data

    that’s exposed from those systems from Data Hub.

    And there’s some sessions

    that are gonna be covering what Data Hub is.

    But inside this model, you’ll see three colors of entities.

    The first are these gray entities here,

    and these are known as virtual entities.

    These are not stored in the Mendix application.

    These are simply retrieved from the source system,

    whether it be OData, SQL, or GraphQL.

    The idea is that these can be queried dynamically

    on any page or any interface

    whether it be on native or web.

    And this allows you to combine that data together

    to build new experiences

    and share data across different modalities

    and different experiences.

    So inside this application,

    it was a very straightforward native mobile application.

    We had some experiences

    where we could view the next appointments,

    we could see the tasks that needed to be completed.

    But some of the interesting items

    were things like we could do barcode scanning

    to be able to check

    that the VIN number was correct for the car,

    as well as being able to do native signatures.

    So using a signature widget,

    we can interact with that particular user

    and get them to confirm

    that they have received that particular vehicle.

    So in the last few minutes, I’ve gone through very quickly

    some of the experiences that we’ve built

    using the Mendix platform

    and given you a flavor of what is really possible

    when you push Mendix to the edge

    and leverage it to its full potential.

    I’ll now hand over to Dave to give our final remarks

    and wrap up this particular session.

    <v ->Thanks Simon, great job as usual.</v>

    So during the last 20 minutes,

    we’ve demonstrated

    what a multiexperience customer journey can look like

    if you use Mendix.

    We utilized PWAs, chatbots, virtual agents,

    native mobile apps, augmented reality,

    TV apps, virtual assistants,

    even an Alexa conversational application.

    So this, my friend

    is what the future of development looks like.

    So moving forward, you’re gonna go way beyond

    just your typical web and mobile applications.

    In fact, Gartner predicts that by 2024,

    one out of three enterprises will use an MXDP

    to accelerate the speed of IT and business fusion teams

    to deliver successful digital products.

    And Mendix is here to help.

    Gartner selected us as a leader

    in the multiexperience development category

    and we really excel at delivering

    these types of applications at speed and scale.

    For example, Mendix is the only leader in the magic quadrant

    that supports all four mobile architectures.

    And we’re the only one that supports native,

    which allows you to deliver the best application

    for every single situation.

    Also multiexperience is much more than just web and mobile.

    And we can help you deliver additional experiences

    like immersive, conversational and recreational.

    So besides thanking Simon

    for all of his great work during the demos,

    I want to leave you with this final thought today.

    To build great experiences,

    the end user must be top of mind.

    And the goal is to deliver applications

    that are so effortless or invisible, as Don would say,

    that they don’t even realize

    the technology that they’re using.

    So make Don proud and start building

    some multiexperience applications today.

    So with that, I’d like to say thanks for attending

    and have a great day.

    (upbeat music)