
O'Reilly Book Coming Early 2021

Data Science on AWS

YouTube Videos, Meetups, Book, and Code: https://datascienceonaws.com


Workshop Description

In this workshop, we build a natural language processing (NLP) model to classify sample Twitter comments and customer-support emails using the state-of-the-art BERT model for language representation.

To build our BERT-based NLP model, we use the Amazon Customer Reviews Dataset, which contains more than 150 million customer reviews from Amazon.com spanning the 20-year period from 1995 to 2015. In particular, we train a classifier to predict the star_rating (1 is bad, 5 is good) from the review_body (free-form review text).
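The workshop notebooks walk through the training pipeline in detail. Purely as an illustration of the idea (not the workshop's actual code), here is a minimal sketch of a 5-class BERT classifier using the Hugging Face transformers library; the base model name and the example review are assumptions for the sketch:

```python
# Minimal sketch: a 5-class BERT classifier head for star-rating prediction.
# Assumptions: "bert-base-uncased" as the base model and 5 output labels
# (star ratings 1-5); the workshop's notebooks may use different settings.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5
)

review_body = "This product exceeded my expectations. Highly recommended!"
inputs = tokenizer(review_body, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Labels 0-4 map to star ratings 1-5.
predicted_star_rating = int(torch.argmax(logits, dim=-1)) + 1
print(predicted_star_rating)
```

A model like this only becomes useful after fine-tuning on the labeled review data; out of the box it produces essentially random ratings.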

Workshop Cost

This workshop is FREE, but would otherwise cost less than 25 USD to run in your own AWS account.


Workshop Agenda


Workshop Contributors


Workshop Instructions

1. Create TeamRole IAM Role

In the AWS Console, open the IAM service and choose Roles, then Create Role. Walk through the wizard: select the trusted service, select the policy to attach, add any tags, and review the role name before creating it.
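If you prefer to script this step, here is a hedged boto3 sketch of the equivalent calls. The role name, trust policy, and description are assumptions for illustration and may differ from what your event setup expects:

```python
# Sketch only: create a TeamRole IAM role that SageMaker can assume.
# The trust policy, role name, and description are illustrative assumptions.
import json
import boto3

iam = boto3.client("iam")

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="TeamRole",
    AssumeRolePolicyDocument=json.dumps(assume_role_policy),
    Description="Workshop role for SageMaker notebook instances",
)
```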

2. Launch an Amazon SageMaker Notebook Instance

Open the AWS Management Console


In the AWS Console search bar, type SageMaker and select Amazon SageMaker to open the service console.

Choose Notebook instances, then click Create notebook instance.

In the Notebook instance name text box, enter workshop.

Choose ml.t3.medium (or, alternatively, ml.t2.medium). We'll only be using this instance to launch jobs. The training jobs themselves will run either on a SageMaker-managed cluster or an Amazon EKS cluster.

Set the volume size to 250 GB. This is needed to explore datasets, build Docker containers, and more. When using SageMaker, training data is copied directly from Amazon S3 to the training cluster during training; when using Amazon EKS, we'll set up a distributed file system that worker nodes use to access the training data (see the sketch at the end of this step).


In the IAM role box, select the default TeamRole.


You must select the default VPC, Subnet, and Security group as shown in the screenshot. Your values will likely be different; this is OK.

Keep the default settings for the other options not highlighted in red, and click Create notebook instance. In the Notebook instances section, you should see the status change from Pending to InService.
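Equivalently, if you want to script this step, here is a minimal boto3 sketch mirroring the console settings above; the role ARN is a placeholder you would replace with your TeamRole's ARN:

```python
# Sketch only: create the "workshop" notebook instance with boto3.
# Replace the RoleArn placeholder with the ARN of your TeamRole.
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_notebook_instance(
    NotebookInstanceName="workshop",
    InstanceType="ml.t3.medium",
    VolumeSizeInGB=250,
    RoleArn="arn:aws:iam::123456789012:role/TeamRole",  # placeholder account ID
)
```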


While the notebook spins up, continue to work on the next section. We'll come back to the notebook when it's ready.
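For context on the volume-size note above: when a SageMaker training job runs, you point it at data in Amazon S3 and SageMaker delivers that data to the training cluster. A hedged sketch using the SageMaker Python SDK, where the container image URI, S3 path, and instance type are placeholders rather than the workshop's actual values:

```python
# Sketch only: how training data in S3 is handed to a SageMaker training job.
# The image URI, S3 path, and instance type are illustrative placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # works inside a SageMaker notebook

estimator = Estimator(
    image_uri="<your-training-image-uri>",
    role=role,
    instance_count=1,
    instance_type="ml.c5.2xlarge",
    sagemaker_session=session,
)

# SageMaker copies (or streams) this S3 data to the training instances.
train_input = TrainingInput(s3_data="s3://<your-bucket>/amazon-reviews/train/")
estimator.fit({"train": train_input})
```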

3. Update IAM Role Policy

Click on the notebook instance to see the instance details.


Click on the IAM role link and navigate to the IAM Management Console.


Click Attach Policies.


Select IAMFullAccess and click on Attach Policy.

Note: As a reminder, you should allow access only to the resources that you need.


Confirm the Policies
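If you'd rather attach and verify the policy programmatically, here is a hedged boto3 sketch; it assumes the notebook instance's role is the TeamRole from step 1:

```python
# Sketch only: attach the IAMFullAccess managed policy to TeamRole.
# Assumes the notebook instance uses the TeamRole created in step 1.
import boto3

iam = boto3.client("iam")

iam.attach_role_policy(
    RoleName="TeamRole",
    PolicyArn="arn:aws:iam::aws:policy/IAMFullAccess",
)

# Confirm the attached policies.
for policy in iam.list_attached_role_policies(RoleName="TeamRole")["AttachedPolicies"]:
    print(policy["PolicyName"])
```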


4. Start the Jupyter notebook

Note: Proceed when the status of the notebook instance changes from Pending to InService.
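To check the status programmatically instead of watching the console, a minimal boto3 sketch (the instance name workshop matches step 2):

```python
# Sketch only: wait until the "workshop" notebook instance is InService.
import boto3

sagemaker = boto3.client("sagemaker")

waiter = sagemaker.get_waiter("notebook_instance_in_service")
waiter.wait(NotebookInstanceName="workshop")

status = sagemaker.describe_notebook_instance(NotebookInstanceName="workshop")
print(status["NotebookInstanceStatus"])  # should print "InService"
```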

Back in the SageMaker console, return to Notebook instances and click Open Jupyter next to the workshop instance.

5. Launch a new Terminal within the Jupyter notebook

Click File > New > [...scroll down...] Terminal to launch a terminal in your Jupyter instance.

6. Clone this GitHub Repo in the Terminal


Within the Jupyter terminal, run the following:

cd ~/SageMaker && git clone https://github.com/data-science-on-aws/workshop

7. Navigate Back to Notebook View

8. Start the Workshop!

Navigate to 01_setup/ in your Jupyter notebook and start the workshop!

You may need to refresh your browser if you don't see the new workshop/ directory.

