Many docs and tutorials are from 10+ years ago. Have you had any luck loading the data dumps (not the API) locally in order to play around with them? If so, I'd very much appreciate it if you could point me in the right direction.
I also didn't find much information about how long it would take to import into a db, so I used the xml dumps directly [1]. I only needed the wiki content (not the history), so the article xml files worked well for me. And then I used mwparserfromhell [2] to parse and extract from the wiki markup.<p>[1] <a href="https://dumps.wikimedia.org/enwiki/20190301/" rel="nofollow">https://dumps.wikimedia.org/enwiki/20190301/</a><p>[2] <a href="https://mwparserfromhell.readthedocs.io/en/latest/" rel="nofollow">https://mwparserfromhell.readthedocs.io/en/latest/</a>
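In case it's useful, the mwparserfromhell side is only a few lines once you have the raw markup of a page (an untested sketch; the wikitext string here just stands in for whatever you pull out of the dump):<p><pre><code> import mwparserfromhell

# 'wikitext' is the raw markup of one article, e.g. the revision text from the dump
wikitext = "'''Example''' is a [[thing]]. {{Infobox|name=Example}}"

wikicode = mwparserfromhell.parse(wikitext)
plain_text = wikicode.strip_code()            # markup stripped, roughly the readable text
templates = wikicode.filter_templates()       # infoboxes and other templates
links = [str(l.title) for l in wikicode.filter_wikilinks()]  # internal link targets
</code></pre>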
While building the Wikipedia mirror on IPFS (with search), we tried using the dumps from Wikipedia themselves but ended up using Zim archives from kiwix.org instead. The end result is here: <a href="https://github.com/ipfs/distributed-wikipedia-mirror" rel="nofollow">https://github.com/ipfs/distributed-wikipedia-mirror</a><p>For actually ingesting the archives, dignifiedquire expanded a Rust utility aptly named Zim, which you can find here: <a href="https://github.com/dignifiedquire/zim" rel="nofollow">https://github.com/dignifiedquire/zim</a><p>Both repos contain documentation (and code, of course) on how to extract content from the Zim archives.
I have toyed around with the Wikipedia dump -- in XML, downloaded through the provided torrent file on Wikipedia.<p>It took a bit to get accustomed to the format, but after looking at the files and doing a bit of research on the documentation, using Python with lxml made it relatively straightforward to do what I was interested in.<p>I'd recommend doing the same, only because it worked for me: get the XML dump, manually check out some files to understand what is going on, search for documentation on the file format and maybe read a few blog posts, and then convert the XML files to data structures suited for what you're interested in.
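If it helps, the lxml part can be as small as this (a rough sketch rather than exactly what I ran; the namespace URL changes between dump versions, so check the xmlns in your file's first lines):<p><pre><code> from lxml import etree

DUMP = "enwiki-pages-articles.xml"  # example name for a decompressed dump
NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # match this to your dump's xmlns

for _, page in etree.iterparse(DUMP, events=("end",), tag=NS + "page"):
    title = page.findtext(NS + "title")
    text = page.findtext(NS + "revision/" + NS + "text") or ""
    # ...convert title/text into whatever data structure you need...
    page.clear()  # then drop processed elements so the whole dump never sits in memory
    while page.getprevious() is not None:
        del page.getparent()[0]
</code></pre>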
You could also use Special:Export depending on your use case: <a href="https://en.wikipedia.org/wiki/Special:Export" rel="nofollow">https://en.wikipedia.org/wiki/Special:Export</a>
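It's scriptable too, since Special:Export/PageTitle returns the same XML schema as the dumps, just for a single page (sketch; the page title is only an example):<p><pre><code> import requests

title = "Python (programming language)"
url = "https://en.wikipedia.org/wiki/Special:Export/" + title.replace(" ", "_")
resp = requests.get(url)
resp.raise_for_status()
page_xml = resp.text  # parse this the same way you'd parse a full dump
</code></pre>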
This may not be the most helpful reply, but I remember having to use some "importing tool". Wikipedia provides standard SQL dumps, yet simply importing them into the DB is not going to cut it; the community has created import scripts that simplify the process to a degree.
I used Python to load the contents of the articles into a DB (potentially wrong extract of veeery old code - I have something like 20 different versions lying around, so I'm not 100% sure this is the one that worked):<p>===<p><pre><code> from lxml import etree

sInputFileName = "/my/input/wiki_file.xml"

# Note: tag='doc' matches extractor-style XML output; the raw dump uses 'page' elements.
context = etree.iterparse(sInputFileName, events=('end',), tag='doc')
for event, elem in context:
    iThisArticleCharLength = len(elem.text or "")
    sPageURL = (elem.get("url") or "")[0:4000]
    sPageTitle = (elem.get("title") or "")[0:4000]
    sPageContents = elem.text or ""
    # ...do what you want with these vars...
    elem.clear()  # release the element so memory stays flat on big files
</code></pre>
===
I built tools to parse the compressed XML dumps. My computer was pretty underpowered at the time (a MacBook Air), so I had to be very careful to make everything a streaming algorithm. Looking back, I basically recreated a shitty map reduce in Python.
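In spirit it looked something like this (an illustrative sketch, not the original code; the '{*}' namespace wildcard needs Python 3.8+):<p><pre><code> import bz2
import xml.etree.ElementTree as ET
from collections import Counter

def pages(path):
    """Map step: stream (title, text) pairs straight out of the compressed dump."""
    with bz2.open(path, "rb") as f:
        for _, elem in ET.iterparse(f, events=("end",)):
            if elem.tag.endswith("}page"):
                title = elem.findtext("{*}title")
                text = elem.findtext("{*}revision/{*}text") or ""
                yield title, text
                elem.clear()  # keep memory roughly constant on an underpowered machine

def article_length_histogram(path):
    """Reduce step: fold the stream into one small aggregate."""
    counts = Counter()
    for _, text in pages(path):
        counts[len(text) // 1000] += 1  # bucket articles by size in ~KB
    return counts
</code></pre>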
I've had some success using this tutorial: <a href="https://www.kdnuggets.com/2017/11/building-wikipedia-text-corpus-nlp.html" rel="nofollow">https://www.kdnuggets.com/2017/11/building-wikipedia-text-co...</a>.<p>And I've changed it a little bit to extract only the first n characters, which might be of some use since the Wikipedia dumps are pretty large: <a href="https://github.com/mooss/ruskea/blob/master/make_wiki_corpus.py" rel="nofollow">https://github.com/mooss/ruskea/blob/master/make_wiki_corpus...</a>.
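If I remember right, that tutorial leans on gensim's WikiCorpus, so the tweak really just truncates each article before writing it out. Roughly (a sketch; WikiCorpus's constructor arguments and whether get_texts() yields str or bytes have shifted across gensim versions):<p><pre><code> from gensim.corpora import WikiCorpus

DUMP = "enwiki-latest-pages-articles.xml.bz2"  # example file name
N = 1000  # keep only the first N characters of each article

wiki = WikiCorpus(DUMP, dictionary={})  # empty dictionary skips vocabulary building
with open("wiki_corpus.txt", "w", encoding="utf-8") as out:
    for tokens in wiki.get_texts():  # one article at a time, as a list of tokens
        text = " ".join(t if isinstance(t, str) else t.decode("utf-8") for t in tokens)
        out.write(text[:N] + "\n")
</code></pre>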
I wrote a simple parser in Node to import the article dump into an Elasticsearch instance as part of a hands-on tutorial: <a href="https://github.com/kldavis4/kuali-days-2017-elasticsearch/blob/master/wikipedia/index.js" rel="nofollow">https://github.com/kldavis4/kuali-days-2017-elasticsearch/bl...</a>. At the time, ingesting the full dump took quite a while (days, as I recall).
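For anyone doing the same from Python rather than Node, the official client's bulk helper keeps the indexing side short (a sketch; the index name, field names, and the pages iterator are placeholders for your own parsing step):<p><pre><code> from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

def actions(pages):
    # 'pages' is whatever generator yields (title, text) from your dump parser
    for title, text in pages:
        yield {"_index": "wikipedia", "_source": {"title": title, "text": text}}

ok, errors = bulk(es, actions(my_pages), chunk_size=500)  # batches requests for speed
</code></pre>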