Web scraping

From Publication Station

Revision as of 10:48, 2 September 2022

Web scraping is used to scrape data such as text and images from websites. In this example we will scrape data from the Gutenberg website.

The purpose of web scraping is to transform web content into usable data for other programs or analysis. In this case we transform the following website into CSV data which can be opened in Microsoft Excel or Numbers.

Alice Wonderland Gutenberg.png
Alice Wonderland Scraped.png

Installing

Step 1:

We will use a browser extension called WebScraper.io. You can install the extension for Firefox or for Chrome.

To learn about all of the functionality in the WebScraper.io extension you can watch the intro video.

Step 2:

Navigate to Alice’s Adventures in Wonderland on the Gutenberg website.

Step 3:

Right click anywhere on the screen and click "inspect". This will open the inspector, a tool commonly used for debugging websites.

Alice Wonderland Inspect.png

Step 4:

You should now have an extra tab called "Web Scraper Dev". Open this tab.

Open WebScraper.io extension.png

Creating a selector

Step 5

Create a new sitemap. Call it, for example, "alice". The start URL is the page you are currently on: https://www.gutenberg.org/files/11/11-h/11-h.htm

Alice-wonderland-create-sitemap.png
Web-scrape-create-sitemap.png
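Behind the scenes, WebScraper.io stores a sitemap as a small JSON document that you can later export or import via the "Sitemap" menu. The sketch below shows roughly what the new sitemap looks like before any selectors are added; the exact field names follow WebScraper.io's export format and are illustrative here.

```json
{
  "_id": "alice",
  "startUrl": ["https://www.gutenberg.org/files/11/11-h/11-h.htm"],
  "selectors": []
}
```

The "selectors" list will be filled in by the next step.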

Step 6

Our goal will be to scrape each title and paragraph.

  • Click on "Add new selector".
  • Add an "Id" which makes sense, for example "content".
  • Set "Type" from "Text" to "HTML". We do this because each paragraph can still have HTML inside it.
  • Click "Select". You can now start selecting which elements you would like to scrape. Start with the title and then the paragraphs while holding "shift".
  • Click on "Done selecting"
  • Check the checkbox for "multiple". Otherwise only the first element will be scraped.
  • Click on "Save selector"

Settings for our new selector
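To make clear what the "content" selector is doing, here is a minimal sketch of the same idea in plain Python: walk the page's HTML and collect the text of every title and paragraph element ("multiple" elements, just like the checkbox in step 6). The snippet uses only the standard library and a short inline HTML fragment standing in for the Gutenberg page, so the tag names and sample text are illustrative.

```python
from html.parser import HTMLParser


class ContentScraper(HTMLParser):
    """Collect the text of <h1> and <p> elements, mimicking a
    selector that matches multiple title/paragraph elements."""

    def __init__(self):
        super().__init__()
        self._tag = None      # tag we are currently inside, if any
        self._text = []       # text fragments for the current element
        self.rows = []        # (tag, text) pairs, one per scraped element

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "p"):
            self._tag = tag
            self._text = []

    def handle_data(self, data):
        if self._tag:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == self._tag:
            self.rows.append((tag, "".join(self._text).strip()))
            self._tag = None


# Inline stand-in for the real page (illustrative, not the actual Gutenberg HTML).
page = """<h1>Alice's Adventures in Wonderland</h1>
<p>Alice was beginning to get very tired of sitting by her sister.</p>
<p>So she was considering in her own mind, as well as she could.</p>"""

scraper = ContentScraper()
scraper.feed(page)
for tag, text in scraper.rows:
    print(tag, "->", text)
```

The extension does the same traversal for you in the browser; the point of the sketch is only that "multiple" means "collect every matching element", not just the first.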

Scrape and export data

Step 7

Click on "Sitemap alice" and then on "scrape". Press "Start scraping" to... you guessed it, start scraping 😃.

This will open a new window in which a robot will "scrape" all the content you selected in the previous steps.

When the scraping is done, press the "refresh" button. If all went okay, you should now see some data.

Step 8

We can now export and download the data.

Press "Sitemap alice" and then "Export data". Click the big blue button ".CSV" to download a CSV file. This file can be opened in Microsoft Excel or Numbers.
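If you want to process the exported file with code rather than a spreadsheet, Python's built-in csv module can read it directly. The column names below are assumptions for illustration: "content" matches the selector Id chosen in step 6, and "web-scraper-order" stands in for the bookkeeping column WebScraper.io adds to its exports; check the header row of your own file.

```python
import csv
import io

# A couple of rows as they might appear in the exported CSV
# (illustrative values, not the real export).
exported = """web-scraper-order,content
1,CHAPTER I. Down the Rabbit-Hole
2,Alice was beginning to get very tired of sitting by her sister
"""

# In practice you would use: open("alice.csv", newline="")
rows = list(csv.DictReader(io.StringIO(exported)))
for row in rows:
    print(row["content"])
```

DictReader maps each row to the header names, so the selector Id you chose becomes the key you look data up by.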

Conclusion

Scraping allows us to gather data from the web, which can then be used in another way, for example in an art installation or to build a unique way of browsing the same content.

Scraping can also be automated to run at intervals, for example each week. You could, for example, scrape music events from different websites and gather those events on your personal agenda page.

What's next? Try scraping other websites and creating multiple selectors. The WebScraper.io intro video is a good place to learn more about selectors.