At AWS, we want to put machine learning in the hands of every developer. For example, we offer pre-trained AI services for areas such as computer vision and language that you can use without any machine learning expertise. Today we are taking another step in that direction with the addition of a new Predictions category to the Amplify Framework. In this way, you can add and configure AI/ML use cases for your web or mobile application with a few lines of code!
AWS Amplify consists of a development framework and developer services that make it super easy to build mobile and web applications on AWS. The open-source Amplify Framework provides an opinionated set of libraries, user interface (UI) components, and a command line interface (CLI) to build a cloud backend and integrate it with your web or mobile apps. Amplify leverages a core set of AWS services organized into categories, including storage, authentication & authorization, APIs (GraphQL and REST), analytics, push notifications, chat bots, and AR/VR.
Using the Amplify Framework CLI, you can interactively initialize your project with amplify init. Then, you can go through your storage (amplify add storage) and user authentication & authorization (amplify add auth) options.
Now, you can also use amplify add predictions to configure your app to identify text in images, analyze scanned documents, interpret text, or convert text to speech.
You can select to have each of the above actions available only to authenticated users of your app, or also for guest, unauthenticated users. Based on your inputs, Amplify configures the necessary permissions using AWS Identity and Access Management (IAM) roles and Amazon Cognito.
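In your application code, you then configure the Amplify library with the generated backend configuration and register the provider that backs the Predictions category. The exact import paths depend on your version of the Amplify libraries; here is a minimal setup sketch, assuming the aws-exports.js file generated by the CLI and the AmazonAIPredictionsProvider from the @aws-amplify/predictions package:

import Amplify from 'aws-amplify';
import { AmazonAIPredictionsProvider } from '@aws-amplify/predictions';
import awsconfig from './aws-exports'; // generated by the Amplify CLI

// Point the Amplify library at the backend resources created by the CLI
Amplify.configure(awsconfig);

// Register the provider that implements the Predictions category with AWS AI services
Amplify.addPluggable(new AmazonAIPredictionsProvider());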
Let’s see how Predictions works for a web application. For example, to identify text in an image using Amazon Rekognition directly from the browser, you can use the following JavaScript syntax and pass a file object:
Predictions.identify({
  text: {
    source: file,
    format: "PLAIN" // "PLAIN" uses Amazon Rekognition
  }
}).then((result) => {...})
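The result passed to the callback contains the recognized text. Its exact shape depends on the version of the Amplify libraries; assuming the text.fullText and text.lines fields, you could print the output like this:

Predictions.identify({
  text: {
    source: file,
    format: "PLAIN"
  }
}).then((result) => {
  // Assumed result shape: fullText is the whole recognized text,
  // lines is an array with one entry per detected line
  console.log(result.text.fullText);
  result.text.lines.forEach((line) => console.log(line));
}).catch((err) => console.error(err));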
If the image is stored on Amazon S3, you can change the source to link to the S3 bucket selected when adding storage to this project. You can also change the format to analyze a scanned document using Amazon Textract. Here’s how to extract text from a form in a document stored on S3:
Predictions.identify({
  text: {
    source: { key: "my/image" },
    format: "FORM" // "FORM" or "TABLE" use Amazon Textract
  }
}).then((result) => {...})
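Again, the shape of the result depends on the library version; assuming a keyValues field listing the key-value pairs detected in the form, a sketch of reading them could look like this:

Predictions.identify({
  text: {
    source: { key: "my/image" },
    format: "FORM"
  }
}).then((result) => {
  // Assumed result shape: each entry pairs a detected form key with its value
  result.text.keyValues.forEach((keyValue) => console.log(keyValue));
}).catch((err) => console.error(err));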
Here’s an example of how to interpret text using all the pre-trained capabilities of Amazon Comprehend:
Predictions.interpret({
  text: {
    source: {
      text: "text to interpret",
    },
    type: "ALL"
  }
}).then((result) => {...})
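With type "ALL", the result combines the different pre-trained Amazon Comprehend capabilities. The exact fields depend on the library version; assuming a textInterpretation object with language, sentiment, and keyPhrases, you could inspect the output like this:

Predictions.interpret({
  text: {
    source: { text: "text to interpret" },
    type: "ALL"
  }
}).then((result) => {
  // Assumed result shape: detected language, overall sentiment, and key phrases
  const { language, sentiment, keyPhrases } = result.textInterpretation;
  console.log(language);              // e.g. "en"
  console.log(sentiment.predominant); // e.g. "POSITIVE"
  keyPhrases.forEach((phrase) => console.log(phrase.text));
}).catch((err) => console.error(err));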
To convert text to speech with Amazon Polly, using the language and voice selected when adding the prediction, and to play it back in the browser, you can use the following code:
Predictions.convert({
  textToSpeech: {
    source: {
      text: "text to generate speech"
    }
  }
}).then(result => {
  var audio = new Audio();
  audio.src = result.speech.url;
  audio.play();
})
Available Now
You can start building your next web or mobile app with Amplify today by following the get-started tutorial here. Please give us your feedback in the Amplify Framework GitHub repository.
There are lots of other options and features available in the Predictions category of the Amplify Framework. Please see this walkthrough on the AWS Mobile Blog for an in-depth example of building a machine learning-powered app.
It has never been easier to add machine learning functionality to a web or mobile app. Please let me know what you’re going to build next.
— Danilo
Source: AWS News