Wikispooks and LittleSis – #SolutionsWatch

by | Oct 12, 2022 | Solutions Watch, Videos | 21 comments

If only there were some handy-dandy websites for finding out more information about the people and organizations we see mentioned in the news. Oh wait, there are! Join James for today’s edition of #SolutionsWatch as he guides you through an exploration of Peter Thiel and shows you some interesting websites along the way.

Watch on Archive / BitChute / Odysee / Substack or Download the mp4

SwissCows search engine



  1. The link to notes and comments under this article on the home page takes me to torturing the truth!!!

    It’s a good article but I’d like to read this one at present 🙂

  2. Inisfad

    I liked BitChute for a while, and the fact that it's almost impossible to get linked to a BitChute video in a web search (even if you put "bitchute" in the search) makes me think they are less under the thumb... but their search function sucks so bad.

    BTW, how did you get to this comment section?? I only got here by following your link in the recent comments.

  3. Thanks James! That was helpful!
    Although not a wiki or search engine, journalists' websites (like TheCR) are great sources for information on people and companies. One of my favorite places to search for connections between "characters of interest" is Whitney Webb's site. There is a search function at the top of the page; then just Ctrl+F to find the chosen entity in each article. The work is always thoroughly detailed and well sourced, with hyperlinked documents. In fact, a lot more info on our example, Peter Thiel (and his many tentacles), can be found there. It would be interesting to hear Whitney give the tips and tricks she uses for research. She has the unique ability to explain why certain connections are relevant/significant and to give names to the mysterious "They".

  4. The thing that makes Wikispooks so interesting is that its site code is entirely open source. You can download the full site source code and database configuration and run a complete node/mirror of the site. As a freelance developer, that interests me greatly. I am planning to spin up a Raspberry Pi and try to locally host a mirror of the site, just to take it for a test drive and get a handle on the structure of the DB.

    From a security standpoint, there are a number of reasons why sharing your source code with everyone is a bad idea. Sharing your DB structure is an even bigger security threat and could leave the site vulnerable to SQL injection attacks. I am certain the developers of the site are aware of these issues and have either overcome them or reasoned that sharing the raw code and DB is the best way to ensure the site cannot be taken down, given the number of backups that preserve the content.

    I mention all of this because it may be a useful tool for preventing another burning of the Library of Alexandria.
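The SQL injection risk mentioned in the comment above can be illustrated with a minimal sketch. This is a generic demonstration using an in-memory SQLite database as a stand-in for a wiki's backend (it is not Wikispooks' actual schema or code): knowing a table's structure lets an attacker craft input that rewrites a naively concatenated query, which is why parameterized queries matter.

```python
import sqlite3

# In-memory database standing in for a wiki backend (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (title TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('Peter Thiel', 'biography text')")

user_input = "' OR '1'='1"  # a classic injection payload

# Unsafe: splicing user input directly into the SQL string. The payload
# turns the WHERE clause into a tautology and returns every row.
unsafe = conn.execute(
    "SELECT title FROM pages WHERE title = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal string,
# so it matches no title at all.
safe = conn.execute(
    "SELECT title FROM pages WHERE title = ?", (user_input,)
).fetchall()

print(unsafe)  # [('Peter Thiel',)] -- the injection leaked the row
print(safe)    # [] -- the payload matched nothing
```

Publishing a schema does not by itself create this hole; it only helps an attacker if the site's queries are built by string concatenation rather than with bound parameters, as MediaWiki-style software avoids.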

  5. Hi James,
    I saw waterspouts coming from deserts, but also huge numbers from Japan. Any ideas why we are having so many waterspouts, especially in deserts, but also in your lovely Japan?
    Take care

  6. Of COURSE Peter Thiel is from Sullivan & Cromwell. The same S&C that had Allen Welsh Dulles (CIA Director) and John Foster Dulles (Secretary of State) as attorneys for the Rockefellers and United Fruit, and that helped funnel money into Germany to fund Hitler. Of course he would come from that law firm.

    Makes perfect sense now.

  7. Pretty interesting aggregator, this one from Al Ayham. In fact, I like that he publishes all his sources, so I have managed to discover a couple of interesting sites I didn't know about before.

  8. Thanks, Al Saleh

    A brilliant tool you have built there!

  9. Another nifty tool worth mentioning is an aggregator like Inoreader. Using Inoreader, you can put together your own set of sources that you can later search. I believe the service only indexes data from the time you add a source (correct me if I am wrong). For the search to be more useful, it would be great if the tool also fetched the sources' back catalogs and made them searchable backwards in time.
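The roll-your-own version of the aggregator idea above can be sketched with the standard library alone: poll each source's RSS feed, store every item in a local archive, and search that archive later. The feed XML is inlined here for self-containment (the titles and URLs are made up); in practice you would fetch each feed with `urllib` on a schedule, which is why, as the comment notes, the archive only grows from the moment you start following a source.

```python
import xml.etree.ElementTree as ET

# Sample RSS 2.0 document standing in for a fetched feed (hypothetical items).
RSS = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>Peter Thiel and Palantir</title>
        <link>https://example.com/1</link></item>
  <item><title>Unrelated post</title>
        <link>https://example.com/2</link></item>
</channel></rss>"""

archive = {}  # link -> title; persists across polls, unlike a one-off search

def ingest(xml_text):
    """Parse an RSS document and add every <item> to the local archive."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        archive[item.findtext("link")] = item.findtext("title")

def search(term):
    """Case-insensitive title search over everything archived so far."""
    return [t for t in archive.values() if term.lower() in t.lower()]

ingest(RSS)
print(search("thiel"))  # ['Peter Thiel and Palantir']
```

Keying the archive on the item link deduplicates entries across repeated polls of the same feed.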


  10. Not me over here spending the whole afternoon sifting through Wikispooks now.

  11. I like these #SolutionsWatch episodes, and always enjoy James walking us through things.
    Geez!…Corbett’s mental transmission fluid must be hypersonic teflon supergrade. The guy can quickly spot things on a page and move that mouse.

    And the comments on the #SolutionsWatch episodes often offer some great stuff too.
    I like seeing what folks contribute.

  12. Wikispooks is a great resource.
    I've actually had a Wikispooks page open on my tracking device since seeing an article that the Australian ABC published at the start of this week that I thought was interesting.
    In the article I found a familiar name: Jane Halton, who I recognized from Event 201 (she was an Australian representing the ANZ Bank at Event 201, so that caught my attention).
    Reading into the article, I didn't actually know she had been appointed to the Coalition for Epidemic Preparedness Innovations.
    I did another DuckDuckGo search and eventually found her name on Wikispooks.

    Then the rabbit hole got really deep when, on the Wikispooks page, I learnt that her husband's name is Trevor Sutton.
    I wanted to verify this, and a two-second search came up with a Canberra Times article about the top 10 power couples in the ACT.

    And for those who are not familiar with what is going on in Australia: her husband, Trevor Sutton, who is also a senior statistician, is the brother of one "professor" Brett Sutton, Chief Health Officer of Victoria...
    And, well, we all watched what happened in Victoria, Australia, didn't we?

    So yes, Wikispooks 100%.

  13. Sullivan & Cromwell?

    Wasn’t that the Wall Street law firm long dominated by the Dulles brothers as the senior partners?

  14. "Soon it will ask you to prove you are a bot and you will have to do some kind of quadratic equation" hahah, thanks for that, James; I needed a good laugh.

    Speaking of bots, this made me think of something I learned about Wikipedia a few years back. This will likely not be news to you James but I will share what I learned about the history and nature of Wikipedia in a few comments below for anyone wanting to know more.

    As many of you likely know, Wikipedia is a big part of the emerging pattern of dominant online platforms disseminating corporate propaganda (labelled as "science" or "expert consensus") and being involved in behavior modification initiatives (intended to perpetuate the status quo). Wikipedia, which Google (Alphabet Corp.) relentlessly promotes, normalizes conflicts of interest by giving the impression that it is an unbiased encyclopedia while taking money, via the Wikimedia Foundation, from drug giants (including Bristol Myers Squibb and Merck), weapons contractors (BAE and Boeing), Big Oil (BP, Exxon), Big Tech monopolists (Microsoft and, obviously, Alphabet Corp.) and banks (Goldman Sachs, JP Morgan). In 2011, donors to the Wikimedia Foundation (the entity that enables Wikipedia) included Google co-founders Larry Page and Sergey Brin.

    When one uses Google to find out about a particular subject, the Wikipedia entry for the given topic usually ranks within the top three returns, along with a mainstream media article and the given subject's website (if one exists and if that website is not on Google's "blacklist"). This is a powerful echo chamber: a tech giant (Google) directs users away from content its programmers consider "unwholesome" (suppression) and towards approved sites (ranking). Wikipedia articles appear in 95% of all Google searches. Even poor-quality Wikipedia pages get millions of hits because they benefit from the popularity of the site.

    Founded in 2001, the English-language Wikipedia now has over six million entries, or "articles," as Wikipedians call them. Around one-third of them have allegedly been edited by a single man: Steven Pruitt, a contractor with the US federal government (Customs and Border Protection), whose parents met at the Defense Language Institute's Russian Department at Lackland Air Force Base, San Antonio, Texas. In addition to these ties to the military-industrial complex, it is worth noting Wikipedia's increasing reliance on automation. Reportedly in his spare time, and using AutoWikiBrowser, a semi-automated tool, Pruitt became Wikipedia's No. 1 editor, with 2.5 million edits to his (and his robot's) name. Corporate media promoted Pruitt's achievements, with Time magazine naming him one of the most influential people on the internet in 2017.

    (continued in another comment…)

    • (continued from a comment above..)

      By that year, Wikipedia's entries totaled nearly 40 million across 291 languages. Each day, around 860 new articles are added. Edits number 817 million and average over 21 per page. In one month alone (June 2015), over 374 million people visited Wikipedia. If published as books, Wikipedia's entries would have totaled 15,930 volumes by 2013. The Wikimedia Foundation operates under US law, is directed by a board of trustees, and raises money for Wikipedia's servers and equipment. Between 2006 and 2009, the Foundation morphed from a volunteer-led organization into a global institution with a centralized HQ and paid staff. With early supporters dropping off in protest over the Foundation's centralization, Professor José van Dijck compares the Wikimedia Foundation to the US Corporation for Public Broadcasting and the Public Broadcasting Service (PBS) in terms of its corporate-like structure within the supposed remit of providing a public service. Until 2006, the notion of a massive collective of contributors simply did not apply, with just two percent of editors making over 70 percent of the edits.

      Beginning in 2006, elite usage declined but hierarchies remained. Lowest in the pecking order are blocked users, unregistered users, new users and autoconfirmed users. The middle classes are the administrators, bureaucrats, stewards and bots. It is interesting that bots rank higher on van Dijck's scale than most humans. The elite of Wikipedia are the developers and system administrators.

      By 2010, 16 percent of all edits were made by bots. "The most active Wikipedians are in fact bots," writes van Dijck, who compares this power concentration to that of other user-generated content platforms. By 2010, the system administrators consisted of just ten people: ten out of 15 million users. Introduced in 2002 to save on administration work, Wikipedia's editors employ an army of bots (457 in 2008) to make automated edits: 3RRBot, Rambot, SieBot, TxiKiBot, and so on. There are generally two kinds of bots: admin bots and co-authoring bots. Admin bots block spam, fix vandalism, correct spelling, discriminate between new and anonymous users, ban targeted users, and search for copyright issues and plagiarism. Tools that alert human editors include Huggle, Lupin and Twinkle.

      The co-authoring bots began with Derek Ramsey's Rambot, which pulled content from public databases and fed it into Wikipedia. Between 2002 and 2010, Rambot created 30,000 articles by pulling data from, among other places, the CIA's World Factbook, another example of Wikipedia's ties to the military-industrial complex. Unlike proprietary algorithms such as EdgeRank and PageRank, Wikipedia's code and licenses are open, yet new editors are welcomed only "tactically". Within this system sits a techno-elite that designs and operates the machinery managing the myriad users. This growth prompted organizational controls, including the distribution of permission levels and the expansion of exclusion and blocking protocols. The resulting hierarchy drew rising complaints about a cumbersome bureaucracy, with the writer Nicholas Carr denouncing the supposed egalitarian expression of collective intelligence as a "myth".
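The Rambot technique described above, turning rows of structured public data into templated article stubs, can be sketched in a few lines. This is a generic illustration, not Rambot's actual code; the CSV data and the template sentence are made up for the example.

```python
import csv
import io

# Hypothetical structured dataset standing in for a public database
# (Rambot itself drew on sources such as US census data and the CIA
# World Factbook).
DATA = """name,population,state
Springfield,9500,Oregon
Riverside,12000,Iowa
"""

# One template sentence per data row yields one article stub per row.
TEMPLATE = "{name} is a town in {state} with a population of {population}."

def generate_stubs(csv_text):
    """Expand each data row into a templated article stub, keyed by name."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return {row["name"]: TEMPLATE.format(**row) for row in rows}

stubs = generate_stubs(DATA)
print(stubs["Springfield"])
# Springfield is a town in Oregon with a population of 9500.
```

The scale follows directly from the approach: once the template is written, the number of articles produced is bounded only by the number of rows in the source database, which is how one bot could create 30,000 entries.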

      (continued in another comment below..)

      • (..continued from a comment above)

        "Meatbot" is a pejorative computer-geek term for a human. The English-language Wikipedia contains the WP:MEATBOTS shortcut, which redirects to a subsection of its Bot Policy, a policy that has ironically been edited by at least 38 bots. The policy demands that human editors "pay attention to the edits they make" and not sacrifice quality for quantity, and it holds the given human responsible for the errors of the bot. Coded by Wikipedian programmers known as Pythons, bots have their own anonymity in some respects. Pythons have developed a bot-building tool known as pywikipediabot (Python Wikipediabot Framework).

        Their edits do not appear as those of distinct users in the MediaWiki software. The bots help dump material from all languages into a data repository called Wikidata. As noted, bots are charged with a variety of tasks, including exercising power over human users. R. Stuart Geiger questions the morality of attempting to put a bot on Wikipedia's Arbitration Committee, which deals with disputes over matters such as entry content, vandals, and the banning of repeat rule-breakers.

        With all that being said, I also acknowledge (and have personally confirmed on occasion) that there are indeed bastions of accurate scientific data and biographical info on platforms like Wikipedia. These "islands of truth in an ocean of misinformation" are spaces where relevant, accurate, honest and actively updated scientific inquiry and data survive, relentlessly and fastidiously protected and corrected by a few dedicated citizen scientists who have managed to become "approved editors". I do not discount the value of such spaces as jumping-off points for launching into specialized areas of research. However, I caution that we should not let the little bit of accurate data present there give us the impression that such platforms are a safe space to learn and to confirm or disprove subject matter.
