How to Scrape Amazon Data: Products, Pricing, Reviews, etc.


2021-10-19 - 6 min read

Nicolae Rotaru

Introduction

Amazon.com is a vast Internet-based enterprise that sells all kinds of goods either directly or as the middleman between other retailers and Amazon.com’s millions of customers.


In this article, you will read about the easiest way to scrape Amazon products and reviews with Page2API.


You will find code examples for Ruby, Python, PHP, NodeJS, cURL, and a No-Code solution that will import Amazon products into Google Sheets.


Amazon scraping can be very useful if you want to accomplish such tasks as:

  • competitor analysis
  • improving your products and value proposition
  • identifying market trends and what influences them
  • price monitoring


Luckily, Amazon.com is a website that is pretty easy to scrape if you have the right tools.

For this purpose, we will use Page2API - a powerful and delightful API that makes web scraping easy and fun.


In this article, we will learn how to:

  • Scrape Amazon products
  • Scrape Amazon reviews

Prerequisites

To perform this simple task, you will need the following things:


  • A Page2API account
    The free trial offers a credit that covers up to 1000 web pages to scrape, and it takes under 10 seconds to create the account if you sign up with Google.

  • A product or a category of products that we are about to scrape.
    In our case, we will search for 'luminox watches'
    and then scrape the reviews for a random product.

How to scrape Amazon products

First, we need to type 'luminox watches' into the search input on Amazon's search page.


This will change the browser URL to something similar to:

  
    https://www.amazon.com/s?k=luminox+watches


The URL is the first parameter we need to perform the scraping.
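Note that the search URL is just the `k` query parameter appended to `https://www.amazon.com/s`. If you want to build it programmatically for arbitrary queries, a minimal sketch in plain Ruby (the `amazon_search_url` helper is hypothetical, not part of Page2API):

```ruby
require 'uri'

# Hypothetical helper: build an Amazon search URL from a free-text query.
# URI.encode_www_form handles the form-urlencoding (spaces become '+').
def amazon_search_url(query)
  "https://www.amazon.com/s?" + URI.encode_www_form(k: query)
end

puts amazon_search_url('luminox watches')
# => https://www.amazon.com/s?k=luminox+watches
```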


The page that you see should look like the following one:

Amazon results page

If you inspect the page HTML, you will find out that a single result is wrapped into a div that looks like the following:

Amazon result

The HTML for a single result element will look like this:

Amazon result breakdown

Now, let's handle the pagination.
There are two approaches that can help us scrape all the needed pages:

1. We can iterate through the pages by clicking on the Next page button
2. We can scrape the pages using the batch scraping feature

Let's take a look at the first approach.


In our case, we must click the Next → button on each iteration, while the list item that contains it is still active:

  
    document.querySelector('.s-pagination-next').click()
  

Next button is enabled

And stop our scraping request when the Next → button becomes disabled.

In our case, a new class (.s-pagination-item.s-pagination-next.s-pagination-disabled) is assigned to the list element where the button is located:

Next button is disabled
The stop condition for the scraper will be the following JavaScript snippet:
  
    document.querySelector('.s-pagination-item.s-pagination-next.s-pagination-disabled') !== null
  

Now it's time to prepare the request that will scrape all products that the search page returned.

The following examples will show how to scrape 3 pages of products from Amazon.com

If we decide to go with the next button approach, our payload will look like:

  
    {
      "api_key": "YOUR_PAGE2API_KEY",
      "url": "https://www.amazon.com/s?k=luminox+watches",
      "real_browser": true,
      "merge_loops": true,
      "scenario": [
        {
          "loop" : [
            { "wait_for": "[data-component-type=s-search-result]" },
            { "execute": "parse" },
            { "execute_js": "document.querySelector('.s-pagination-next').click()" }
          ],
          "stop_condition": "document.querySelector('.s-pagination-item.s-pagination-next.s-pagination-disabled') !== null",
          "iterations": 3
        }
      ],
      "parse": {
        "watches": [
          {
            "_parent": "[data-component-type='s-search-result']",
            "title": "h2 >> text",
            "link": ".a-link-normal >> href",
            "price": ".a-price-whole >> text",
            "stars": ".a-icon-alt >> text"
          }
        ]
      }
    }
  

Code examples (next button approach)

      
    require 'rest_client'
    require 'json'

    api_url = 'https://www.page2api.com/api/v1/scrape'

    # The following example will show how to scrape 3 pages of products from Amazon.com

    payload = {
      api_key: 'YOUR_PAGE2API_KEY',
      url: "https://www.amazon.com/s?k=luminox+watches",
      real_browser: true,
      merge_loops: true,
      scenario: [
        {
          loop: [
            { wait_for: "[data-component-type=s-search-result]" },
            { execute: "parse" },
            { execute_js: "document.querySelector('.s-pagination-next').click()" }
          ],
          stop_condition: "document.querySelector('.s-pagination-item.s-pagination-next.s-pagination-disabled') !== null",
          iterations: 3
        }
      ],
      parse: {
        watches: [
          {
            _parent: "[data-component-type=s-search-result]",
            title: "h2 >> text",
            link: ".a-link-normal >> href",
            price: ".a-price-whole >> text",
            stars: ".a-icon-alt >> text"
          }
        ]
      }
    }

    response = RestClient::Request.execute(
      method: :post,
      payload: payload.to_json,
      url: api_url,
      headers: { "Content-type" => "application/json" },
    ).body

    result = JSON.parse(response)

    puts(result)
      
    

If we decide to go with the batch scraping approach, our payload will look like:

  
    {
      "api_key": "YOUR_PAGE2API_KEY",
      "batch": {
        "urls": "https://www.amazon.com/s?k=luminox+watches?page=[1, 3, 1]",
        "concurrency": 1,
        "merge_results": true
      },
      "parse": {
        "watches": [
          {
            "_parent": "[data-component-type=s-search-result]",
            "title": "h2 >> text",
            "link": ".a-link-normal >> href",
            "price": ".a-price-whole >> text",
            "stars": ".a-icon-alt >> text"
          }
        ]
      }
    }
  
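The `[1, 3, 1]` placeholder in the batch URL stands for "pages 1 through 3, stepping by 1", and Page2API expands it into one URL per page on its side. As an illustration of that expansion semantics (the `expand_batch` helper is hypothetical, shown only to make the notation concrete):

```ruby
# Expand a [start, stop, step] range into explicit page URLs,
# mirroring (as an assumption) how the batch notation is expanded.
def expand_batch(template, start, stop, step)
  start.step(stop, step).map { |page| template % page }
end

urls = expand_batch("https://www.amazon.com/s?k=luminox+watches&page=%d", 1, 3, 1)
urls.each { |u| puts u }
# => https://www.amazon.com/s?k=luminox+watches&page=1
#    https://www.amazon.com/s?k=luminox+watches&page=2
#    https://www.amazon.com/s?k=luminox+watches&page=3
```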

Code examples (batch scraping approach)

      
    require 'rest_client'
    require 'json'

    api_url = 'https://www.page2api.com/api/v1/scrape'

    # The following example will show how to scrape 3 pages of products from Amazon.com

    payload = {
      api_key: 'YOUR_PAGE2API_KEY',
      batch: {
        urls: "https://www.amazon.com/s?k=luminox+watches?page=[1, 3, 1]",
        concurrency: 1,
        merge_results: true
      },
      parse: {
        watches: [
          {
            _parent: "[data-component-type=s-search-result]",
            title: "h2 >> text",
            link: ".a-link-normal >> href",
            price: ".a-price-whole >> text",
            stars: ".a-icon-alt >> text"
          }
        ]
      }
    }

    response = RestClient::Request.execute(
      method: :post,
      payload: payload.to_json,
      url: api_url,
      headers: { "Content-type" => "application/json" },
    ).body

    result = JSON.parse(response)

    puts(result)
      
    

The result

  
    {
      "result": {
        "watches": [
          {
            "title": "Men's Luminox Leatherback Sea Turtle 44mm Watch",
            "link": "https://www.amazon.com/Luminox-Leatherback-Turtle-Giant-Black/dp/B07CVFWXMR/ref=sr_1_2?dchild=1&keywords=luminox+watches&qid=1634327863&sr=8-2",
            "price": "$223.47",
            "stars": "4.5 out of 5 stars"
          },
          {
            "title": "The Original Navy Seal Mens Watch Black Display (XS.3001.F/Navy Seal Series): 200 Meter Water Resistant + Light Weight Case + Constant Night Visibility",
            "link": "https://www.amazon.com/Luminox-Wrist-Watch-Navy-Original/dp/B07NYXV77C/ref=sr_1_3?dchild=1&keywords=luminox+watches&qid=1634327863&sr=8-3",
            "price": "$254.12",
            "stars": "4.3 out of 5 stars"
          },
          {
            "title": "Leatherback SEA Turtle Giant - 0323",
            "link": "https://www.amazon.com/Luminox-Leatherback-SEA-Turtle-Giant/dp/B07PBC31N8/ref=sr_1_4?dchild=1&keywords=luminox+watches&qid=1634327863&sr=8-4",
            "price": "$179.00",
            "stars": "4.3 out of 5 stars"
          },
          ...
        ]
      },
      ...
    }
  
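Once the response arrives, the parsed products are plain data that you can post-process however you like. As a quick, hypothetical illustration (field names taken from the payload above, sample records abridged from the result), here is how you might sort the scraped watches by price:

```ruby
# Sample records shaped like the "watches" entries in the result above.
watches = [
  { "title" => "Men's Luminox Leatherback Sea Turtle 44mm Watch", "price" => "$223.47" },
  { "title" => "Leatherback SEA Turtle Giant - 0323", "price" => "$179.00" }
]

# Strip the currency symbol and thousands separators, then sort numerically.
sorted = watches.sort_by { |w| w["price"].delete("$,").to_f }

sorted.each { |w| puts "#{w['price']} - #{w['title']}" }
```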

How to scrape Amazon reviews

First, we need to click on the See all reviews link on the product page.


This will change the browser URL to something similar to:

  
    https://www.amazon.com/product-reviews/B072FNJLBC


The URL is the first parameter we need in order to scrape the reviews.


The HTML from a single review will look like this:

Amazon review HTML

Luckily, the pagination handling is similar to the one described above, so we will use the same flow.

Now it's time to prepare the request that will scrape all reviews.

If we decide to go with the next button approach, our payload will look like:

  
    {
      "api_key": "YOUR_PAGE2API_KEY",
      "url": "https://www.amazon.com/product-reviews/B072FNJLBC",
      "real_browser": true,
      "merge_loops": true,
      "scenario": [
        {
          "loop" : [
            { "wait_for": ".a-pagination li.a-last" },
            { "execute": "parse" },
            { "execute_js": "document.querySelector('.a-pagination li.a-last a').click()" }
          ],
          "stop_condition": "document.querySelector('.a-last.a-disabled') !== null"
        }
      ],
      "parse": {
        "reviews": [
          {
            "_parent": "[data-hook='review']",
            "title": ".review-title >> text",
            "author": ".a-profile-name >> text",
            "stars": ".review-rating >> text",
            "content": ".review-text >> text"
          }
        ]
      }
    }
  

Code examples (next button approach)

      
    require 'rest_client'
    require 'json'

    api_url = 'https://www.page2api.com/api/v1/scrape'

    # The following example will show how to scrape multiple pages of reviews from Amazon.com

    payload = {
      api_key: 'YOUR_PAGE2API_KEY',
      url: "https://www.amazon.com/product-reviews/B072FNJLBC",
      real_browser: true,
      merge_loops: true,
      scenario: [
        {
          loop: [
            { wait_for: "[data-hook=review]" },
            { execute: "parse" },
            { execute_js: "document.querySelector('.a-pagination li.a-last a').click()" }
          ],
          stop_condition: "document.querySelector('.a-last.a-disabled') !== null"
        }
      ],
      "parse": {
        "reviews": [
          {
            _parent: "[data-hook=review]",
            title: ".review-title >> text",
            author: ".a-profile-name >> text",
            stars: ".review-rating >> text",
            content: ".review-text >> text"
          }
        ]
      }
    }

    response = RestClient::Request.execute(
      method: :post,
      payload: payload.to_json,
      url: api_url,
      headers: { "Content-type" => "application/json" },
    ).body

    result = JSON.parse(response)

    puts(result)
      
    

If we decide to go with the batch scraping approach, our payload will look like:

  
    {
      "api_key": "YOUR_PAGE2API_KEY",
      "real_browser": true,
      "batch": {
        "urls": "https://www.amazon.com/product-reviews/B072FNJLBC/?pageNumber=[1, 3, 1]",
        "concurrency": 1,
        "merge_results": true
      },
      "parse": {
        "reviews": [
          {
            "_parent": "[data-hook=review]",
            "title": ".review-title >> text",
            "author": ".a-profile-name >> text",
            "stars": ".review-rating >> text",
            "content": ".review-text >> text"
          }
        ]
      }
    }
  

Code examples (batch scraping approach)

      
    require 'rest_client'
    require 'json'

    api_url = 'https://www.page2api.com/api/v1/scrape'

    # The following example will show how to scrape 3 pages of reviews from Amazon.com

    payload = {
      api_key: 'YOUR_PAGE2API_KEY',
      real_browser: true,
      batch: {
        urls: "https://www.amazon.com/product-reviews/B072FNJLBC/?pageNumber=[1, 3, 1]",
        concurrency: 1,
        merge_results: true
      },
      parse: {
        reviews: [
          {
            _parent: "[data-hook=review]",
            title: ".review-title >> text",
            author: ".a-profile-name >> text",
            stars: ".review-rating >> text",
            content: ".review-text >> text"
          }
        ]
      }
    }

    response = RestClient::Request.execute(
      method: :post,
      payload: payload.to_json,
      url: api_url,
      headers: { "Content-type" => "application/json" },
    ).body

    result = JSON.parse(response)

    puts(result)
      
    

The result

  
    {
      "result": {
        "reviews": [
          {
            "title": "Great watch & easy to read in low light conditions",
            "author": "Paul E. Papas",
            "stars": "5.0 out of 5 stars",
            "content": "I'm a 60+ year old equestrian and outdoorsman. I was looking for a watch that could take the shock of firearm discharge ..."
          },
          {
            "title": "Not Water Resistant, impossible to get amazon help",
            "author": "Benjamin H. Curry",
            "stars": "2.0 out of 5 stars",
            "content": "This watch has a 2 year warranty from date of purchase however after not even on full year my adult son went swimming with it ..."
          },
          ...
        ]
      },
      ...
    }
  
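The scraped reviews are just as easy to work with. For example, the star ratings come back as strings like "5.0 out of 5 stars", and `to_f` conveniently stops at the first non-numeric character, so computing an average rating is a one-liner (sample records abridged from the result above):

```ruby
# Sample records shaped like the "reviews" entries in the result above.
reviews = [
  { "stars" => "5.0 out of 5 stars" },
  { "stars" => "2.0 out of 5 stars" }
]

# "5.0 out of 5 stars".to_f => 5.0 (parsing stops at the first space).
ratings = reviews.map { |r| r["stars"].to_f }
average = ratings.sum / ratings.size

puts format("Average rating: %.1f / 5", average)
# => Average rating: 3.5 / 5
```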

How to export Amazon products to Google Sheets

To export our Amazon products to a Google Spreadsheet, we need to slightly modify our request so that it returns the data in CSV format instead of JSON.

According to the documentation, we need to add the following parameters to our payload:
  
    "raw": {
      "key": "watches", "format": "csv"
    }
  

Now our payload will look like:

{ "api_key": "YOUR_PAGE2API_KEY", "real_browser": false, "raw": { "key": "watches", "format": "csv" }, "batch": { "urls": [ "https://www.amazon.com/s?k=luminox+watches?page=1", "https://www.amazon.com/s?k=luminox+watches?page=2", "https://www.amazon.com/s?k=luminox+watches?page=3" ], "concurrency": 1, "merge_results": true }, "parse": { "watches": [ { "_parent": "[data-component-type=s-search-result]", "title": "h2 >> text", "link": ".a-link-normal >> href", "price": ".a-price-whole >> text", "stars": ".a-icon-alt >> text" } ] } }

Please note that the batch URLs are defined explicitly to make it simpler to edit the payload.


Now, edit the payload above if needed, and press Encode →

The Encode button will generate a URL containing the encoded payload.

Note: If you are reading this article while logged in, you can copy the link above, since it will already contain your api_key in the encoded payload.

The final part is adding the IMPORTDATA function, and we are ready to import our Amazon products into a Google Spreadsheet.
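In the spreadsheet, the formula is simply IMPORTDATA pointed at that URL; the long token below is a placeholder for the encoded-payload URL produced by the Encode button, not a real value:

```
=IMPORTDATA("YOUR_ENCODED_PAYLOAD_URL")
```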

The result should look like the following one:

Amazon products import to Google Sheets

Conclusion

That's pretty much it!
As we've previously mentioned, Amazon.com is a website that is pretty easy to scrape if you have the right tools.
This is the place where Page2API shines, making web scraping super easy and fun.
