How to improve the wikimedia/wikipedia dataset

#63
by albertvillanova (HF staff)

This is a space for the community to propose/discuss possible future improvements for the next version of the wikimedia/wikipedia dataset.

The dataset loading script is located at: https://huggingface.co/datasets/wikimedia/wikipedia/blob/script/wikipedia.py

Potential axes:

I have pushed a version of the new Wikipedia script to the "script-html" branch: https://huggingface.co/datasets/wikimedia/wikipedia/tree/script-html

  • For the moment, it works in streaming mode (see the loading sketch below)
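
For anyone who wants to try the branch, here is a minimal loading sketch in streaming mode. It is only a sketch: the config name "20231101.en" and the need for `trust_remote_code` are assumptions on my side, not confirmed details of the branch.

```python
from datasets import load_dataset

# Hypothetical example: the config name "20231101.en" is assumed here and may
# differ on the experimental branch.
ds = load_dataset(
    "wikimedia/wikipedia",
    "20231101.en",
    revision="script-html",   # take the loading script from the experimental branch
    streaming=True,           # the branch currently works in streaming mode
    trust_remote_code=True,   # script-based datasets require this in recent `datasets` versions
)

# Iterate lazily over the stream; no full download is needed.
for article in ds["train"]:
    print(article["title"])
    break
```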

Hi Albert, does the version on the script-html branch fix the data integrity issues (such as those listed at https://huggingface.co/datasets/wikimedia/wikipedia/discussions/59#66100cb0150eb83552bd997a)?

That is the intention; we are still working on it.

While working with the Wikimedia Enterprise Snapshot API, we discovered that some Wikipedia articles appear multiple times (once per revision). We contacted their support, and they confirmed that this is the case:

With the "Snapshots" dataset some WME clients want multiple article revisions, while others only need the latest article. Our goal is to let clients choose how to handle older revisions.
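
Until the older revisions can be filtered out at the source, a consumer could deduplicate the snapshot lines and keep only the latest revision of each article. The sketch below assumes the snapshot is newline-delimited JSON and that the fields `identifier` and `version.identifier` identify the article and revision; those field names are assumptions, not confirmed here.

```python
import json

def latest_revisions(lines):
    """Keep, for each article, the entry with the highest revision identifier."""
    latest = {}
    for line in lines:
        article = json.loads(line)
        key = article["identifier"]                  # assumed article id field
        revision = article["version"]["identifier"]  # assumed revision id field
        if key not in latest or revision > latest[key][0]:
            latest[key] = (revision, article)
    return [article for _, article in latest.values()]
```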

Two questions:

  • Is it possible to have smaller "test" splits with randomly sampled rows from the core dataset, for all Wikimedia languages?
  • Is it possible to download only a subset of the dataset using the Python client, e.g. all entries whose title starts with a letter between G and P? (A possible workaround is sketched below.)
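
Until such features exist natively, both requests can be approximated with the `datasets` library in streaming mode. This is only a sketch, and the config name "20231101.en" is an assumption:

```python
from datasets import load_dataset

# Assumed config name; any dump/language pair would work the same way.
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)

# 1) A small pseudo-random sample: shuffle within a buffer and take a few rows.
#    This only approximates a truly random split, since shuffling is buffer-based.
sample = ds.shuffle(seed=42, buffer_size=10_000).take(1_000)

# 2) A title-range subset: keep rows whose title starts with a letter from G to P.
subset = ds.filter(lambda row: "G" <= row["title"][:1].upper() <= "P")

for row in sample:
    print(row["title"])
    break
```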
