Wikipedia talk:Artificial intelligence
ClueBot NG
Thanks for this page. Maybe add the anti-vandalism bot used by the English Wikipedia?
Pyb (talk) 11:03, 31 January 2023 (UTC)
Not really an essay
I'd suggest {{Information page}}, since this is in the nature of a summary of the diversity of uses, and doesn't propose policy. Pharos (talk) 14:54, 14 February 2023 (UTC)
Done - Agree with your assessment. - Fuzheado | Talk 16:54, 14 February 2023 (UTC)
Article text generation
The information under "Article text generation" doesn't reflect how the community feels about this in my experience. As I understand it, using AI to generate article text is very strongly discouraged, including when it's checked for quality, and even competent editors will be scolded for trying. I suggest we remove anything that encourages using it for content writing, and maybe consider replacing it with more practical uses like consulting AI for copyediting suggestions. Pinging the primary author: Fuzheado Thebiguglyalien (talk) 19:51, 6 October 2024 (UTC)
- Agree 100% with TBUA. Anything coming out of an LLM should be considered unreliable by default. Leaving the door open for nonsensical hallucinations (sometimes based on made-up sources) to be used on Wikipedia is unacceptable. Especially given how often Wikipedia is used as training data for LLMs, anything from a generative AI that makes its way into a Wikipedia article can make its way back into the LLM and lead to model collapse. There are so many reasons to prohibit writing articles using LLMs that I would advise caution even for copyediting. --Grnrchst (talk) 20:46, 6 October 2024 (UTC)
Following Wikipedia:Village_pump_(policy)/Archive_201#URLs_with_utm_source=chatgpt.com_codes, I have added detection for possible AI-generated slop to my script.
Possible AI-slop sources will be flagged in orange, though I'm open to changing that color in the future if it causes issues. If you have the script, you can see it in action on those articles.
For now the list of AI sources is limited to ChatGPT (utm_source=chatgpt.com), but if you know of other ChatGPT-like domains, let me know!
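For anyone curious what this kind of check boils down to, here is a minimal sketch in Python (the actual script is a JavaScript userscript; the regex and function name below are illustrative assumptions, not its real code):

```python
import re

# Illustrative pattern only: flags URLs carrying a ChatGPT referral parameter.
# The real userscript's matching logic may differ.
AI_SOURCE_PATTERN = re.compile(r"[?&]utm_source=chatgpt\.com\b", re.IGNORECASE)

def is_possible_ai_slop(url: str) -> bool:
    """Return True if the URL looks like it was copied out of a ChatGPT answer."""
    return bool(AI_SOURCE_PATTERN.search(url))

# Example: is_possible_ai_slop("https://example.com/page?utm_source=chatgpt.com") -> True
```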
Headbomb {t · c · p · b} 22:27, 8 April 2025 (UTC)
AI tool to fact-check articles (proof of concept)
I have created a proof-of-concept tool for automated fact-checking of articles against sources using AI. GitHub repository. An OpenAI API key (or one from a compatible provider) is required (I use BotHub). It is cost-effective: when using gpt-4.1-nano, verifying one 100-word block against a single source (approximately 12,000 characters) costs about 0.1 cent. Functionality:
- The program loads the article text from a file, along with all available sources (text files: source1.txt, source2.txt, etc.).
- It divides the article into blocks of approximately 100 words, preserving sentence boundaries.
- For each block and each source, it:
  - sends a request to the OpenAI API for correspondence analysis;
  - receives credibility probabilities for each word.
- It combines the results across all blocks and sources.
- It visualizes the text with color coding based on the obtained probabilities (a text mode with all sources combined, or a GUI that allows selecting individual sources).
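A minimal sketch of that pipeline, assuming Python and the official openai client (the prompt, function names, and output format below are illustrative assumptions; the actual code is in the repository):

```python
# Illustrative sketch of the fact-checking flow described above; the real tool's
# prompts, JSON schema, and file layout live in the GitHub repository.
import glob
import json
import re
from openai import OpenAI

client = OpenAI()  # or OpenAI(base_url=...) for BotHub or a local endpoint

def split_into_blocks(text: str, target_words: int = 100) -> list[str]:
    """Split the article into ~100-word blocks without breaking sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    blocks, current, count = [], [], 0
    for sentence in sentences:
        current.append(sentence)
        count += len(sentence.split())
        if count >= target_words:
            blocks.append(" ".join(current))
            current, count = [], 0
    if current:
        blocks.append(" ".join(current))
    return blocks

def check_block_against_source(block: str, source: str, model: str = "gpt-4.1-nano") -> list[float]:
    """Ask the model for a per-word credibility probability (0..1) for one block."""
    prompt = (
        "Given the SOURCE, rate how well each word of the CLAIM is supported. "
        "Return a JSON array with one probability per word.\n"
        f"SOURCE:\n{source}\n\nCLAIM:\n{block}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

article = open("article.txt", encoding="utf-8").read()
sources = [open(path, encoding="utf-8").read() for path in sorted(glob.glob("source*.txt"))]

results = []
for block in split_into_blocks(article):
    # Keep the best (maximum) support found across all sources for each word.
    per_source = [check_block_against_source(block, source) for source in sources]
    combined = [max(scores) for scores in zip(*per_source)]
    results.append((block, combined))
# 'results' can then be rendered with color coding (text mode or GUI).
```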
Installation and usage instructions, along with example screenshots, are available in the README. Bugs are certainly present (almost all code was generated using Anthropic Claude 3.7).
It is also possible to use locally hosted models by installing an OpenAI-API-compatible LLM server (such as the LLaMA.cpp HTTP Server) and directing the script to it with the --base_url and --model parameters.
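As a rough illustration of that local setup, here is a sketch assuming the openai Python client and a llama.cpp server on port 8080 (the port and model name are assumptions; the actual invocation is documented in the README):

```python
from openai import OpenAI

# Point the same OpenAI client at a locally hosted, OpenAI-compatible server,
# e.g. a llama.cpp HTTP server started with: llama-server -m model.gguf --port 8080
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="local-model",  # whatever name the local server exposes
    messages=[{"role": "user", "content": "Does the source support this claim?"}],
)
print(response.choices[0].message.content)
```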
Suggestions and proposals are welcome, but unless submitted as pull requests, they will be reviewed at an indeterminate time. The creation of new tools based on this idea and code is strongly encouraged. Kotik Polosatij (talk) 13:44, 5 May 2025 (UTC)