Gain insight with Data Extraction

Collect data with our Apache Nutch Committer software and gain the insights you need.

Time-saving:

No longer spend time manually collecting and processing data.

Reliable and accurate insights:

With advanced technologies, you're guaranteed high-quality, relevant data that leads to profound insights.

Flexibility for all users:

Whether you're technically inclined or not, our solutions are designed for everyone.

Make Data Extraction easy with our applications

Collect information from across the web with Data Extraction. Get to know various aspects of data collection, such as crawling, scraping, and parsing. Learn more about the applications we offer.

  • Webcrawler
  • Advanced Parser
  • Entity Extraction
  • Collect specific data from the web
  • Scrape specific websites
  • Avoid duplicate links: detect spider traps
  • Data as a service
Our applications
Webcrawler

A webcrawler (also known as a spider) roams the internet looking for new pages. The goal of a webcrawler is to index pages for search engines. We help you set up the webcrawler so you don’t have to worry about it.
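
For readers who want to see the idea in code: the sketch below shows, in a few lines of Java, what a crawler essentially does. It is only an illustration, not our production setup (which is built on Apache Nutch); it uses the jsoup library, a made-up seed URL, a small page budget, and it leaves out politeness rules such as robots.txt.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

public class MiniCrawler {

    public static void main(String[] args) throws Exception {
        Queue<String> frontier = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        frontier.add("https://example.com/");   // hypothetical seed URL

        int budget = 50;                        // stop after 50 pages in this sketch
        while (!frontier.isEmpty() && visited.size() < budget) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;    // skip pages we have already fetched

            Document page = Jsoup.connect(url).get();
            System.out.println(url + " -> " + page.title());

            // Discover new pages by following every outgoing link.
            for (Element link : page.select("a[href]")) {
                String next = link.absUrl("href");
                if (!next.isEmpty() && !visited.contains(next)) {
                    frontier.add(next);
                }
            }
        }
    }
}
```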

Advanced parser

Our parser retrieves all kinds of data from the internet. It detects languages, main texts, images, and product prices. It also distinguishes an article from a homepage, a forum thread from a webshop product, and so on. This allows you to search for specific information.
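
As a rough illustration of the signals such a parser combines, the Java sketch below detects the page language with OpenNLP's language detector, counts images, scans the text for prices, and makes a crude guess between a webshop product and an article. The URL, the price pattern, and the "add to cart" heuristic are assumptions made for the example; they are not our actual parsing rules.

```java
import opennlp.tools.langdetect.Language;
import opennlp.tools.langdetect.LanguageDetectorME;
import opennlp.tools.langdetect.LanguageDetectorModel;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

import java.io.FileInputStream;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PageParserSketch {

    // Very rough price pattern (assumption: prices look like "€ 12,95" or "EUR 12.95").
    private static final Pattern PRICE = Pattern.compile("(€|EUR)\\s?\\d+[.,]\\d{2}");

    public static void main(String[] args) throws Exception {
        // Hypothetical page to analyse.
        Document doc = Jsoup.connect("https://example-shop.test/product/1").get();
        String text = doc.body().text();

        // 1. Language detection with OpenNLP (langdetect-183.bin is the stock pre-trained model).
        LanguageDetectorModel model =
                new LanguageDetectorModel(new FileInputStream("langdetect-183.bin"));
        Language lang = new LanguageDetectorME(model).predictLanguage(text);
        System.out.println("language: " + lang.getLang());

        // 2. Images: every <img> element with a src attribute.
        System.out.println("images: " + doc.select("img[src]").size());

        // 3. Prices: naive regex scan of the visible text.
        Matcher m = PRICE.matcher(text);
        while (m.find()) System.out.println("price: " + m.group());

        // 4. Crude page-type guess: products tend to show a price and "add to cart" wording,
        //    articles tend to be long runs of text without either.
        boolean looksLikeProduct = PRICE.matcher(text).find()
                && text.toLowerCase().contains("add to cart");
        System.out.println(looksLikeProduct ? "webshop product" : "article / other");
    }
}
```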

Entity extraction

Entity extraction determines the relevant parts of a text: it identifies names, people, companies, organizations, locations, cities, and products. Curious how this works? Try the demo on this webpage!
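
For the technically curious, here is a minimal sketch of entity extraction using OpenNLP's name finder. The pre-trained en-ner-person.bin model (and its path) and the example sentence are illustrative choices, not the exact models that power our service.

```java
import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.tokenize.SimpleTokenizer;
import opennlp.tools.util.Span;

import java.io.FileInputStream;

public class EntityExtractionSketch {

    public static void main(String[] args) throws Exception {
        // en-ner-person.bin is one of the pre-trained OpenNLP name-finder models.
        TokenNameFinderModel model =
                new TokenNameFinderModel(new FileInputStream("en-ner-person.bin"));
        NameFinderME finder = new NameFinderME(model);

        String[] tokens = SimpleTokenizer.INSTANCE.tokenize(
                "Alice Johnson works for Acme Corporation in Amsterdam.");

        // Each Span marks the token range of one detected entity.
        for (Span span : finder.find(tokens)) {
            StringBuilder entity = new StringBuilder();
            for (int i = span.getStart(); i < span.getEnd(); i++) {
                entity.append(tokens[i]).append(' ');
            }
            System.out.println(span.getType() + ": " + entity.toString().trim());
        }
    }
}
```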

Collect specific data from the web

Our crawler is able to find particular information on the internet. For instance, it can provide you with a list of domains that use a specific CMS or contain certain words or content. This makes doing research and finding sales opportunities easy.
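
As a small illustration of this kind of query, the sketch below checks whether a domain appears to run WordPress by looking at the generator meta tag and wp-content references. The domain list and the heuristics are assumptions made for the example.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

import java.util.List;

public class CmsChecker {

    /** Very rough WordPress check: a generator meta tag or wp-content assets in the page. */
    static boolean looksLikeWordPress(Document doc) {
        String generator = doc.select("meta[name=generator]").attr("content").toLowerCase();
        return generator.contains("wordpress") || doc.html().contains("/wp-content/");
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical list of domains to check; a real crawl supplies these automatically.
        List<String> domains = List.of("https://example.com", "https://example.org");

        for (String domain : domains) {
            Document doc = Jsoup.connect(domain).get();
            if (looksLikeWordPress(doc)) {
                System.out.println(domain + " appears to run WordPress");
            }
        }
    }
}
```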

Scrape specific websites

Use our scraper to gather specific data from certain websites. This is useful if you want to analyze product descriptions from online stores.
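
A minimal scraping sketch with jsoup is shown below. The shop URL and the CSS selectors are hypothetical: in practice a scraper is tailored to the markup of each target site.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ProductScraperSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical category page of an online store.
        Document doc = Jsoup.connect("https://example-shop.test/category/shoes").get();

        // Hypothetical selectors; each shop's HTML structure needs its own.
        for (Element product : doc.select("div.product")) {
            String name        = product.select("h2.title").text();
            String price       = product.select("span.price").text();
            String description = product.select("p.description").text();
            System.out.printf("%s | %s | %s%n", name, price, description);
        }
    }
}
```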

Avoid duplicate links: detect spider traps

Our spider trap detector detects and bypasses spider traps, which prevents irrelevant and duplicate pages from being indexed. We offer the spider trap detector for a fixed license fee across various platforms.
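
To give an idea of how such a detector works, here is a simplified sketch of two common defences: rejecting URLs whose shape suggests an infinite crawl space, and hashing page bodies so the same content is not indexed twice under different URLs. The depth limit and the session-id checks are illustrative assumptions, not the actual rules our detector applies.

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashSet;
import java.util.HexFormat;
import java.util.Set;

public class TrapGuard {

    private static final int MAX_PATH_DEPTH = 8;   // assumption: very deep paths are suspect
    private final Set<String> seenContentHashes = new HashSet<>();

    /** Reject URLs whose shape hints at an endless crawl space (calendars, session ids, ...). */
    boolean looksLikeTrap(String url) {
        URI uri = URI.create(url);
        String path = uri.getPath() == null ? "" : uri.getPath();
        long depth = path.chars().filter(c -> c == '/').count();
        String query = uri.getQuery() == null ? "" : uri.getQuery().toLowerCase();
        return depth > MAX_PATH_DEPTH
                || query.contains("sessionid")
                || query.contains("jsessionid");
    }

    /** Skip pages whose body has already been indexed under another URL. */
    boolean isDuplicateContent(String body) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha.digest(body.getBytes(StandardCharsets.UTF_8));
        return !seenContentHashes.add(HexFormat.of().formatHex(digest));
    }
}
```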

Make it easy for yourself with data as a service

To make things easy for you, we offer Data as a Service, where we handle the crawling, parsing, and scraping for you. With Data as a Service, you automatically receive the data you need, either periodically, or as a one-time delivery. We provide it as a file, a feed, or directly into your application.

Data Extraction demo

Try our demo

Curious about our data extraction? Enter a URL and see which meta-information is directly extracted by our parser.

Data Extraction: the techniques

In Data Extraction, we use the following techniques (a small Apache Jena and SPARQL sketch follows the list):

  • Apache Nutch
  • SAX
  • Part-of-Speech tagging (OpenNLP)
  • Host Deduplication
  • Apache Jena
  • SPARQL
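
As a small taste of how the last two items fit together, the sketch below stores one extracted fact in an in-memory Apache Jena model and queries it with SPARQL. The namespace and the "mentions" property are made up for the example.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.RDF;

public class SparqlSketch {

    public static void main(String[] args) {
        // Hypothetical namespace for facts produced by the extraction pipeline.
        String ns = "http://example.org/extracted#";

        // Store one extracted fact in an in-memory Jena model.
        Model model = ModelFactory.createDefaultModel();
        model.createResource(ns + "page1")
                .addProperty(RDF.type, model.createResource(ns + "Article"))
                .addProperty(model.createProperty(ns, "mentions"), "Amsterdam");

        // Ask which pages mention which entities.
        String query = "PREFIX ex: <" + ns + "> "
                + "SELECT ?page ?entity WHERE { ?page ex:mentions ?entity }";

        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("page") + " mentions " + row.get("entity"));
            }
        }
    }
}
```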

Pricing

Plan         Pricing per month   Pages crawled   Startup fee   Spidertrap detector (optional)   Advanced parser (optional)
Starter      €25,-               10.000          €100,-        €90,-                            €200,-
Small        €125,-              100.000         €200,-        €180,-                           €400,-
Medium       €500,-              1.000.000       €300,-        €270,-                           €600,-
Large        €1.500,-            10.000.000      €400,-        €360,-                           €800,-
Enterprise   €3.000,-            100.000.000     €500,-        €450,-                           €1.000,-
Custom       € call              Custom          Custom        Custom                           Custom

Frequently Asked Questions

What is Data Extraction?
Data Extraction is the process of collecting and analyzing large amounts of unstructured data from the web. With tools like our Apache Nutch Committer software, users obtain valuable insights from this data.

What is the difference between a webcrawler and a scraper?
A webcrawler (also known as a spider) roams the internet in search of new pages to index for search engines. A scraper, on the other hand, is specifically designed to gather information from certain websites, such as product descriptions from online stores.

What is entity extraction?
Entity extraction is the process of identifying relevant entities such as names, people, companies, locations, and more in a text. You can try the demo on our website to see how this works.

What is a spider trap?
A spider trap is a structural issue on websites that causes crawlers to get stuck on endless URLs, leading to the indexing of irrelevant and duplicate pages. Our spider trap detector is designed to detect and avoid these pitfalls.

How do I use Data Extraction without technical expertise?
Use our Data as a Service option. With this option, we handle the crawling, parsing, and scraping for you, and you receive the data you need automatically and periodically, without needing technical expertise.

Want to work with us? Mail Jack at info@openindex.io

Or call us at +31 50 85 36 600

Jack Bos