Wikipedia talk:Featured article candidates
Image/source check requests
Requests should only be posted here for FAC nominations that have attracted several reviews and declarations of support. Premature requests can be removed by any editor.
- Wikipedia:Featured article candidates/Mascarene teal/archive1 - requesting a particularly detailed source review, as three sourcing mistakes were discovered during a regular review. FunkMonk (talk) 02:50, 4 December 2025 (UTC)
- Done, but several sources are books I don't have access to. Jo-Jo Eumerus (talk) 08:54, 4 December 2025 (UTC)
- I have some digital versions I can send you if you send me an email? FunkMonk (talk) 11:43, 4 December 2025 (UTC)
FAC mentoring: first-time nominators
A voluntary mentoring scheme, designed to help first-time FAC nominators through the process and to improve their chances of a successful outcome, is now in action. Click here for further details. Experienced FAC editors, with five or more "stars" behind them, are invited to consider adding their names to the list of possible mentors, also found in the link. Brianboulton (talk) 10:17, 30 August 2016 (UTC)
FAC source reviews
For advice on conducting source reviews, see Wikipedia:Guidance on source reviewing at FAC.
Recent trend upwards in reviewers per article
In the monthly FAC reviewing statistics I say that the average promoted FAC receives between six and seven reviews. I don't recall how long ago I calculated that number but I thought it was time to check it, and it appears to be trending up slowly over the last couple of years. I'll change the statistics boilerplate to say "between seven and eight reviews", but I thought people might like to see the graph, so here it is. Mike Christie (talk - contribs - library) 09:49, 19 September 2025 (UTC)
- That's interesting data ... shows a healthy ecosystem. Out of curiosity: do the statistics count the number of first-time nominators per year? (If it is not readily available, don't do any extra work, I'm just curious). Noleander (talk) 12:45, 19 September 2025 (UTC)
- I'm not convinced this is significant. Sure, if you cherry-pick the range from March 2023 to the last data point on the graph, it shows an increase. But it's really not any bigger than the noise. I would not want to say anything about the trend without doing some careful statistics, but my general feeling on these things is that if the trend is so vague that you need to do careful statistics to prove it exists, then it probably doesn't. My overall take on the data is that it's basically flat since 2015. RoySmith (talk) 13:07, 19 September 2025 (UTC)
- That's fair; I agree it might be noise. The trend, such as it is, goes back to 2022, and the 2025 average (7.6) is higher than it has been since 2012, but not by much. However, the average number of reviews is definitely over 7 on most timescales so I think the wording change is justified. Mike Christie (talk - contribs - library) 16:09, 19 September 2025 (UTC)
Noleander, here's that data. For 2025 this is only about six months' worth of data, as FACs are only added to the database when archived or promoted, so the data runs a month or two behind calendar date. Mike Christie (talk - contribs - library) 16:09, 19 September 2025 (UTC)
- Thanks for the data. Good to see plenty of new nominators: one or two per week. Noleander (talk) 16:14, 19 September 2025 (UTC)
| Year | First-time nominators |
|---|---|
| 2006 | 416 |
| 2007 | 719 |
| 2008 | 431 |
| 2009 | 288 |
| 2010 | 242 |
| 2011 | 162 |
| 2012 | 156 |
| 2013 | 141 |
| 2014 | 98 |
| 2015 | 95 |
| 2016 | 85 |
| 2017 | 79 |
| 2018 | 79 |
| 2019 | 58 |
| 2020 | 85 |
| 2021 | 93 |
| 2022 | 63 |
| 2023 | 84 |
| 2024 | 109 |
| 2025 | 53 |
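As an aside on the "trend versus noise" question raised above, here is a minimal sketch of the kind of calculation involved. The per-FAC review counts below are invented placeholders, not facstats data; the point is only the shape of the computation, comparing each year's mean review count with its year-to-year spread so that a drift from roughly 6.5 to 7.6 can be judged against the noise.

```python
# Illustrative only: the numbers below are invented placeholders, not facstats data.
# The real per-FAC review counts live in the database behind the facstats tool.
from collections import defaultdict
from statistics import mean, stdev

# (year the FAC was promoted or archived, number of reviews it received)
closed_facs = [
    (2023, 6), (2023, 7), (2023, 8), (2023, 6),
    (2024, 7), (2024, 8), (2024, 6), (2024, 7),
    (2025, 8), (2025, 7), (2025, 9), (2025, 7),
]

by_year = defaultdict(list)
for year, reviews in closed_facs:
    by_year[year].append(reviews)

for year in sorted(by_year):
    counts = by_year[year]
    spread = stdev(counts) if len(counts) > 1 else 0.0
    print(f"{year}: mean {mean(counts):.1f} reviews per FAC, "
          f"std dev {spread:.1f}, n = {len(counts)}")
```

A recent year whose mean sits well outside the spread of earlier years would support the wording change; a shift smaller than the typical standard deviation would not.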
Research paper about Wikipedia
This is not directly relevant to FAC but some of you might find this paper interesting. [1]. Graham Beards (talk) 09:53, 27 September 2025 (UTC)
- Interesting. There are a few user scripts that can highlight questionable sources (e.g. those condemned at WP:RSP): I wonder if anyone's written one that cross-references RetractionWatch to do something similar? UndercoverClassicist T·C 10:58, 27 September 2025 (UTC)
- The article says there is a WP bot that tags or marks citations if they are retracted. I was unaware of that; apparently it is User:RetractionBot? It looks like there is a category of pages so tagged: Category:Articles citing retracted publications, but it only has about 26 articles (that count is limited to articles where the retraction tag is not yet reviewed & analyzed). That seems like a small number. Another category, Category:Articles intentionally citing retracted publications, has about 270 articles, but apparently some editor(s) validated that the retracted article should be retained for some reason. But maybe I'm looking in the wrong place. Noleander (talk) 13:48, 27 September 2025 (UTC)
- @Noleander: Thanks for the ping! Yes, RetractionBot takes data from RetractionWatch and marks sources appropriately where found. I need to get my head around the methodology of the paper more, and why the bot is missing a large number of the retracted papers mentioned in the article - but that's a task for the weekend and some database dump analysis!
- For clarity, I've checked my emails and the paper authors have made no attempt to reach out to me - I may well reach out to see if I can improve my methods; however, they seem to be using an external source to generate the cross-references. Mdann52 (talk) 17:10, 1 October 2025 (UTC)
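A minimal sketch of the kind of cross-reference UndercoverClassicist wonders about above might look like the following. It assumes a local CSV export of retracted DOIs from the Retraction Watch data; the file name and column name are assumptions, and this is not RetractionBot's actual code.

```python
# Minimal sketch, not RetractionBot's actual code. Assumes "retractions.csv" is a
# local export of retracted DOIs (the column name below is an assumption) and that
# the article's wikitext has already been fetched.
import csv
import re

def load_retracted_dois(path="retractions.csv", doi_column="OriginalPaperDOI"):
    """Return the set of DOIs flagged as retracted in the downloaded dataset."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[doi_column].strip().lower()
                for row in csv.DictReader(f) if row.get(doi_column)}

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s|}\]<>]+", re.IGNORECASE)

def retracted_citations(wikitext, retracted_dois):
    """List DOIs cited in the wikitext that appear in the retracted set."""
    cited = {m.group(0).rstrip(".,").lower() for m in DOI_PATTERN.finditer(wikitext)}
    return sorted(cited & retracted_dois)

# Usage (hypothetical):
# print(retracted_citations(article_wikitext, load_retracted_dois()))
```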
New Commons help desk
I just saw an announcement today that Commons has spun up a new help desk. This should be of interest to any FAC authors who have found images they want to use but can't because they're not properly licensed:
This page is a forum for requesting permission from copyright holders for their works to be used under a license compatible with Commons. Experienced volunteers will reach out to the copyright holder, request their permission to release the work under a compatible license, and help facilitate the issuance and verification of that release so that the work can be uploaded to Wikimedia Commons and used in Wikimedia projects
RoySmith (talk) 15:45, 12 October 2025 (UTC)
FAC reviewing statistics and nominator reviewing table for September 2025
Here are the FAC reviewing statistics for September 2025. The tables below include all reviews for FACs that were either archived or promoted last month, so the reviews included are spread over the last two or three months. A review posted last month is not included if the FAC was still open at the end of the month. The new facstats tool has been updated with this data, but the old facstats tool has not. Mike Christie (talk - contribs - library) 01:30, 18 October 2025 (UTC)
Reviewers for September 2025
Supports and opposes for September 2025
The following table shows the 12-month review-to-nominations ratio for everyone who nominated an article that was promoted or archived in the last three months who has nominated more than one article in the last 12 months. The average promoted FAC receives between 7 and 8 reviews. Mike Christie (talk - contribs - library) 01:30, 18 October 2025 (UTC)
Nominators for July 2025 to September 2025 with more than one nomination in the last 12 months
-- Mike Christie (talk - contribs - library) 01:30, 18 October 2025 (UTC)
Note to coords
@FAC coordinators: we're back over fifty noms running at the moment and the page is getting a little heavy. There are several nominations that could be closed off one way or the other to streamline the page a little. Cheers - SchroCat (talk) 08:17, 19 October 2025 (UTC)
- Thanks for the note, Schro. I've closed seven nominations and I'm keeping an active eye on the open noms. While we're at it, FACBot is acting up. It's not using "external" link properly and not removing the hatnote about the nom being open. Hawkeye7, would you mind taking a look at this please? FrB.TG (talk) 10:21, 20 October 2025 (UTC)
- I replaced the old FACBot featured article task with a new version written in the C# language. My apologies for any instability that may result. I have corrected the issues with the external link and removal of the {{hatnote}}. Please report any further issues that you have. Hawkeye7 (discuss) 00:11, 21 October 2025 (UTC)
- Hey Hawkeye7, the link issue seems to have been fixed but FACBot still leaves the hatnote in place. [2] FrB.TG (talk) 04:17, 22 October 2025 (UTC)
- D'oh! Should be fixed now. Hawkeye7 (discuss) 01:07, 23 October 2025 (UTC)
- Hi Hawkeye7, for some reason this archiving lists me as being the person who archived the nom, when it was actually FrB.TG... Not sure what happened there! Cheers - SchroCat (talk) 05:27, 27 October 2025 (UTC)
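For anyone curious what the hatnote clean-up discussed above amounts to mechanically, here is a rough sketch of the sort of wikitext edit involved. It is not FACBot's actual C# code; the template name is a made-up placeholder, and a production bot would use a real wikitext parser (such as mwparserfromhell) rather than a regex.

```python
# Rough sketch only; FACBot itself is written in C# and this is not its code.
# "FAC-open-hatnote" is a made-up placeholder for whichever hatnote template marks
# the nomination as open; nested templates would need a proper parser, not a regex.
import re

OPEN_HATNOTE = re.compile(r"\{\{\s*FAC-open-hatnote\s*(\|[^{}]*)?\}\}\n?", re.IGNORECASE)

def remove_open_hatnote(wikitext: str) -> str:
    """Return the wikitext with the open-nomination hatnote stripped."""
    return OPEN_HATNOTE.sub("", wikitext, count=1)

# Usage (hypothetical):
# new_text = remove_open_hatnote(old_text)
# if new_text != old_text:
#     save the page with an edit summary noting the closure
```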
Minimum number of supports?
I don't see anything mentioning this, so I'm curious: what is the minimum number of supports needed for a FAC to pass? This isn’t including image and source reviews, of course, just reviews of the article itself. Crystal Drawers (talk) 20:13, 20 October 2025 (UTC)
- There isn't one. -- Guerillero Parlez Moi 20:39, 20 October 2025 (UTC)
- Technically true, but I don't think a FAC has ever been promoted with only one support. It's very rare for a FAC to be promoted with two supports. On the other hand a FAC can be archived even if it has multiple supports, if there are well-reasoned opposes -- this is an example. Mike Christie (talk - contribs - library) 22:41, 20 October 2025 (UTC)
- Three is a good rule of thumb but not a firm guideline. Generalissima (talk) (it/she) 00:12, 21 October 2025 (UTC)
Make "Urgent nominations" list more visible?
I wanted to do an FA review today, and I figured I'd do one of the articles in the List of FA Nominations in desperate need of reviews. Every time I try to find that list (about once a month) it takes me 10 minutes. I always start my search at WP:FAC, but I could not find it there. I did eventually find it today in the sidebar of Wikipedia talk:Featured article candidates.
And, as it turns out, a link is in the WP:FAC page, tucked away in the body text of the intro paragraphs: "Nominations in urgent need of review are listed here." It is sort of hidden, plus it requires one to be on the lookout for the specific word "urgent".
Query: Is it possible to add a link to the FAC Urgent list into the WP:FAC page sidebar "Tools" section? That's where I always look first. Not a big deal ... maybe I'm the only one having a hard time finding it. Noleander (talk) 00:00, 23 October 2025 (UTC)
- Heavily agree with all this. Generalissima (talk) (it/she) 03:28, 24 October 2025 (UTC)
- It's worth noting that WP:FLC has the urgent nominations transcluded right above everything else, which I think works well — FAC could consider mirroring it. —TechnoSquirrel69 (sigh) 05:29, 24 October 2025 (UTC)
- That would work for FAC. Alternatively, something tiny in the sidebar would also be good. Noleander (talk) 10:22, 24 October 2025 (UTC)
- Done. Let me know if the current placement works. FrB.TG (talk) 13:04, 24 October 2025 (UTC)
- Thanks. That looks great! Noleander (talk) 14:30, 24 October 2025 (UTC)
Possible central FA project page
- The difficulty in finding the correct page for our processes was most recently raised by Sandy here, and this is the same kind of issue. Basically, we have a hell of a lot of pages with little or no centralisation. Following on from Kusma's point, we might want to think about something like this:
| Featured Article main landing page |
|---|
| The following discussion has been closed. Please do not modify it. |
| Et cetera. |
—Fortuna, imperatrix 13:36, 24 October 2025 (UTC)
- In principle, a central landing page (like WP:GAN has) is a good idea, but I think we'd need to cut that version down substantially for it to be of any use -- I'd venture that 99% or so of people going anywhere near an FA-process-related page want WP:FAC, and even knowing the system fairly well it took me a while to figure out what I'd click on to get there. It might be worth thinking slightly brutally about priorities -- which pages we want to present as "headlines" to users and which might be able to tolerate being less visible (the log, for example). UndercoverClassicist T·C 13:44, 24 October 2025 (UTC)
- Absolutely, but I want to avoid WP:BIKESHED. In fact, my original version only had about five tabs, with one of them being "Miscellaneous" (as you say, for the logs, coordination, etc, which are pretty inside baseball for those inside the project, let alone outside it). I went with this version merely as an exemplar of how many pages we have, and so make it easier (hopefully) to select those of greater importance. —Fortuna, imperatrix 13:51, 24 October 2025 (UTC)
- GA and PR both have excellent landing pages: their top row of tabs is much easier to use than the sidebar approach currently used by FA and DYK. GA has two rows of tabs, which avoids clutter, and might work better for FA than the one-row FA example shown (collapsed) above. If FAC would be the most commonly accessed tab, then it should be the leftmost tab. I like the idea of a centralized FA landing page. Noleander (talk) 14:42, 24 October 2025 (UTC)
- Would this be a heading on the current WP:FA or something else?--Wehwalt (talk) 15:29, 24 October 2025 (UTC)
- @Wehwalt: That would probably be workshopped, at the moment I'm just opening the conversation. —Fortuna, imperatrix 12:04, 26 October 2025 (UTC)
- I could probably get used to something along these lines if there were fewer tabs across the top. A centralised page and a little more coherence in approach would be beneficial, I think. We've never (to the best of my knowledge - although I'm sure people will correct me) broken down the process and made it streamlined or coherent in any meaningful way - not surprising when it's a process that's grown organically from the early days, but it would be worth putting in some effort to rationalise the process and the pages into a logical format. - SchroCat (talk) 14:59, 7 November 2025 (UTC)
- I like the idea, and I like creating something similar to PR or GA's tabs that this proposal does. I think some of the topics could be merged. For example: merge the "general discussion" with "FAC discussion", merge "Log" and "Statistics", remove "FAC coordination" (I think that's mostly for the FAC coordinators, not the average editor), and put the template box at the top of the "List of FA candidates" page instead of having its own tab. Z1720 (talk) 15:05, 7 November 2025 (UTC)
- Double row and slightly slimmed down could provide us with something along these lines (I've left the raw links so people can see where they're heading, but we can gloss them later):
I'm not convinced we need all the links (stats and log are superfluous to my mind, but others may disagree). I think we could probably get away with the page Wikipedia talk:Featured articles and have it redirected here to have one centralised page, but again that may not be agreed on by everyone. - SchroCat (talk) 14:57, 10 November 2025 (UTC)
- Wouldn't we want to have a direct link to Wikipedia:Featured article review somewhere in the header? The FAR process and the FAC process are pretty tightly connected - we used to even transclude all the FARs at the bottom of the FAC page until the transclusion limit was getting routinely exceeded. 15:16, 10 November 2025 (UTC)
- Just saw this per an edit summary, not opposed to the idea (although undeniably WT:FAC is the main landing page), but I am opposed if FAR is excluded. A cleaner tab page would present the main landing talk pages: WT:FAC, WT:FAR, WT:TFA, along with WP:FA, WP:WIAFA and WP:FAS, and leave off everything else -- the internal working templates, how-tos, mentoring pages, logs, urgents, etc. We only need to make sure people not familiar with the process land on the right process page, where they can find the rest. Overcomplicating the one landing page will make it less useful. SandyGeorgia (Talk) 15:35, 10 November 2025 (UTC)
- GA has merged their GAN/GAR/GA talk pages to WT:GA. DYK also has one talk page for everything. Would this be something FA would also want? Merging the talk pages might help make more editors aware of what is happening in other aspects of the FA project and reduce the number of talk pages in this heading. Z1720 (talk) 16:41, 10 November 2025 (UTC)
- I assumed that this proposal included the notion of a single FA Talk page. But when I look at the example tab layout shown above in the collapsed "Featured Article main landing page" table, I see it shows two Talk pages: "General Discussion" and "Candidate discussion". I support a single, consolidated FA Talk page. Does anyone want 2 or more FA-related Talk pages? If so, we could have a discussion focused on that choice. Noleander (talk) 16:53, 10 November 2025 (UTC)
- I'd like to see a discussable proposal before we rule things in or out. Wehwalt (talk) 20:26, 10 November 2025 (UTC)
Use of LLM in reviewing
With the rise of AI in recent years, I think it's important to discuss its impact on the FAC process as well. Unless I missed it, there is no Wikipedia-wide policy that covers this topic. Do you think using an LLM should be allowed when reviewing an FA nomination? If so, should the reviewer disclose this for full transparency? As a coordinator, I'm inclined to dismiss a review which is completely AI-generated without any human input and disregard it like I would a quick drive-by non-substantial review. I would like the community to reach a consensus on this and perhaps have it in writing in the FAC instructions. FrB.TG (talk) 07:23, 26 October 2025 (UTC)
- Related to this: Wikipedia:Writing articles with large language models, a proposed guideline with an ongoing RfC. FrB.TG (talk) 08:38, 26 October 2025 (UTC)
- As a nominator, I would decline to even read an LLM-generated review. I think it is disrespectful. If I wanted one, I could have generated one myself prior to nominating the article. The thing produces walls of mostly useless text; am I supposed to address all of that point-by-point? I am here to collaborate with humans, not with LLMs. And the next time I nominate an article on my pet topic, the AI makes the same mistakes again, so I have to address those again because the LLM does not learn, showing me that my responses are pointless – that is super frustrating. I think that the human aspect at FAC is critical, and that we should stick with that. --Jens Lallensack (talk) 08:02, 26 October 2025 (UTC)
- ↑↑↑↑ What he said. (If someone wants to use an LLM as a prompt to point out issues for themselves and then that editor writes their own text to justify their position with reference to our policies and guidelines, that's different, but let's not have second-rate computer dross in place of reasoned thought and analysis.) - SchroCat (talk) 08:14, 26 October 2025 (UTC)
- Instinctively, I'm with both of the above -- it's extremely bad form to half-arse (bluntly) your work as a reviewer and then expect a nominator to put a lot of work engaging with it. There are some tricky things that make implementing a rule like "no AI-generated reviews" complicated -- we can have a rule about blindingly obvious all-AI submissions, like they have at CSD, but it's going to be difficult to create a rule that can go much further than the tip of the iceberg and still be workable in practice. I've written elsewhere that I'm very sceptical about the use of AI for article writing, and that even "good" LLM use can be a really big problem. On the other hand, I struggle to completely do away with the argument that, if the points raised are good, who cares whether they came from the reviewer, their friend reading over their shoulder, or an LLM? UndercoverClassicist T·C 11:13, 26 October 2025 (UTC)
- Thinking aloud here as much as anything, I suspect (I hope) that there is general support for something like what I shall call the SchroCat parenthesis position. The question has probably been raised here partly to generally inform the community, partly to gain input from a wider pool of editors and partly to see if it would be helpful to come up with a rule regarding how coordinators and - to my mind more importantly - nominators should react to reviews suspected of being AI generated. One can imagine a newer nominator spending a lot of time and effort on an AI-generated review, or even giving up in despair, unaware that the coordinators were never going to give that review any weight. I think the question is: what if anything do we wish to do about this issue; and as a supplementary: does the community wish to give any instruction, advice or general comments to the coordinators? As I said earlier, more a stream of consciousness than a careful analysis. Gog the Mild (talk) 11:56, 26 October 2025 (UTC)
- "unaware that the coordinators were never going to give that review any weight": I wonder if you've answered your own question here: should the co-ordinators aim to get to reviews which, for whatever reason, aren't going to be considered within scope before the nominator does, and indicate why they won't be given (much) weight in a closing discussion? This obviously applies to LLM reviews, but I can think of a few recently (again, without wishing or intending to point specific fingers) where it might have been reasonable to say (for example) "these comments are personal preference rather than FAC criteria: the nominator is welcome to act on them if they want, but they won't hold up the process and we will discount any oppose vote based entirely on them". UndercoverClassicist T·C 12:04, 26 October 2025 (UTC)
- Traditionally the coordinators have shied away from stating what weight they might give an individual review - or SFAIAA ever attempted to reach a collective position on such a weighting - during the review process. A little less so when closing. I think that my fellow coordinators and I could work with that suggestion, but anticipating possible problems down the road we would feel happier if this were the (reasonably) clearly expressed wish of the community. Although this may be horse and carting - it presupposes that overt LLM/AI input at reviews is not welcome. I am aware of WP:HATGPT and WP:LLMCOMM and of Fortuna imperatrix mundi's comment below, but this is FAC and sometimes we do things our own way. Gog the Mild (talk) 12:38, 26 October 2025 (UTC)
- My generation tends to use it on a regular basis, and it can be useful if used in the right way. Fully or mostly AI-generated reviews should be disregarded, but it can still help spot inconsistencies and obvious errors that the human eye might miss. MSincccc (talk) 13:27, 26 October 2025 (UTC)
- SchroCat, I agree that "second-rate computer dross in place of reasoned thought and analysis" is a no-brainer; I'd go further, even, and suggest we'd be bringing the project into disrepute—and certainly cheapening the product—by allowing AI to have an influence on what, after all, is meant to be "Wikipedia's very best work". —Fortuna, imperatrix 14:16, 26 October 2025 (UTC)
- @Fortuna imperatrix mundi I would agree as well. MSincccc (talk) 14:30, 26 October 2025 (UTC)
- Is this a theoretical question, or have people actually been submitting LLM-generated reviews? I think the current generation of LLMs are wonderful for research; I use them often as a better search engine. But there's no way we should be allowing LLM-generated reviews and I'm shocked at the thought that people might actually be doing that. They are a tool. Used wisely, they increase productivity. Used unwisely, they produce garbage. Using them to generate a FAC review is a prime example of the latter. RoySmith (talk) 11:01, 26 October 2025 (UTC)
- I am unaware of any statements that a review is AI generated, but IMO the question is not theoretical. That said, I am not sure that finger pointing at this point would be helpful. Gog the Mild (talk) 11:31, 26 October 2025 (UTC)
- There have been LLM GA reviews. CMD (talk) 12:10, 26 October 2025 (UTC)
- I recall a case where an editor clearly used an LLM during a FAR. The results were notably weak and poorly supported compared to even the least substantiated human reviews I have seen on Wikipedia. Borsoka (talk) 13:39, 26 October 2025 (UTC)
- Currently our best response—and akin to the community's view—is to follow WP:HATGPT, which provides that "comments that are obviously generated (not merely refined) by a large language model (LLM) or similar AI technology may be struck or collapsed with {{Collapse AI top}}", following a pretty decisive RfC earlier this year. —Fortuna, imperatrix 12:14, 26 October 2025 (UTC)
- The {{FAC-instructions}} template, which is transcluded at the top of the WP:FAC page, states that "For a nomination to be promoted to FA status, consensus must be reached that it meets the criteria. Consensus is built among reviewers and nominators; the coordinators determine whether there is consensus." To me, the self-evident reading of that is that an LLM does not get a seat at the table, so to speak, in the consensus-building process of determining whether the article meets the WP:Featured article criteria (I also see that the {{Collapse AI top}} template explicitly says "LLM-generated arguments should be excluded from assessments of consensus."). I do agree with the caveat SchroCat and UndercoverClassicist point out, however: if the article, say, uses the word "effect" where the correct word would have been "affect", it doesn't really matter whether the reviewer spotted that themselves, looked it up in a dictionary, or had it pointed out to them by a friend or even an LLM. TompaDompa (talk) 16:10, 26 October 2025 (UTC)
- LLMs are locomotives rolling down the tracks. Any policy should focus on how LLMs should be used – not if. There is a huge difference between using an LLM to write an FAC review (bad) and using an LLM as a tool when writing an FAC review (possibly helpful). WP:LLMCOMM already addresses the use of LLMs as tools (e.g. grammar correction; English as a second language; giving the reviewer ideas to pursue). But WP:LLMCOMM is a bit out of date because it relies on a sharp distinction between LLMs (discouraged as a tool) and "dedicated translation tools" or "dedicated grammar checker tools" (permitted as tools). That distinction was perhaps valid a year or two ago, but is now blurry and is fading over time. For those reasons, any LLM policy should focus on guardrails rather than strict prohibition. Guardrails such as:
- Disclosure and transparency
- Use LLM only as an assistive tool, not a creative process
- Human validation and review of all LLM output: never take LLM output at face value
- Never substitute LLM for human judgement and insight
- Never use LLM for material that you yourself could not generate organically
- I already use several software tools when performing an FAC review: copyright violation tool, citation validation tool, authorship percentage tool, external links tool, duplicate links tool, etc. Can guidelines be established that permit LLMs to fit into that toolbox? Noleander (talk) 23:34, 26 October 2025 (UTC)
- I have seen LLMs being used in pre-FAC review. Not commonly, but as advice for potential improvements I think they can work. With respect to using LLMs for reviews themselves, I think I'd want to know how they are being used - prose check, source spotcheck? I don't think that a !vote by an LLM should be factored in, unless someone were to go to the length of writing a Wikipedia FA-specific LLM that is trained in respecting WIAFA. Jo-Jo Eumerus (talk) 17:02, 27 October 2025 (UTC)
- I do not think that LLMs should be used in the FAC process. I do not think that they should have any place in a "toolbox" when it comes to FAC reviewing (or honestly with editing on Wikipedia in general). In my humble opinion, there are far too many legitimate concerns and criticisms about the usage of LLMs. While there are benefits to using LLMs for higher-level research and work, I do not think that editing and reviewing on Wikipedia falls into that category. An editor can review a FAC without LLMs. I do not see a strong argument for them. I agree with above comments in that I would also not read or engage with an LLM-generated review. Just because something exists, it does not mean that its usage should be encouraged or even permitted. I completely agree with everything that Jens Lallensack has said above. Aoba47 (talk) 19:18, 27 October 2025 (UTC)
New bot version
@FAC coordinators: I have released a new version of the FACBot. Functionality is unchanged but please report any issues. Hawkeye7 (discuss) 02:15, 30 October 2025 (UTC)
- Since we're on the topic, what exactly is the purpose of the "update daily marker" edits? Is there some operational difference between how nominations above or below the "Older nominations" break are treated? RoySmith (talk) 12:46, 31 October 2025 (UTC)
- Yes. Gog the Mild (talk) 14:07, 31 October 2025 (UTC)
- If I may be so bold, might I enquire as to what that difference is? RoySmith (talk) 15:09, 31 October 2025 (UTC)
- I am unaware of any purpose, and I cannot speak for my fellow coordinators, but I use the three-week break to suggest
- The earliest date at which a nomination has given potential reviewers a fair chance to comment and so might be promoted. Given a convincing consensus to do so of course.
- A reminder to check if a nomination has attracted two or more solid supports but not either a source or image review and so needs listing at Image and source check requests.
- The earliest date at which I might notify a nominator that their nomination is likely to be archived if it does not display some prompt movement towards a consensus to promote.
- A useful point at which to check if a nomination has 2 or 3 solid supports and no ongoing reviews and so should be considered for listing at Urgents.
- Once past the three-week mark I will check nominations routinely for these four things. If the number of older-than-three-week nominations is high, it is a prompt to up their incidence; if it is lower, that I can relax a little and perhaps concentrate on other aspects of FAC, other aspects of Wikipedia, or even - gasp - real life. Gog the Mild (talk) 17:48, 31 October 2025 (UTC)
- It was part of the manual procedure. The FACBot merely automates it. Originally there was an Urgent list, but the coordinators found it burdensome to update, so it was replaced with the Older nominations header on 20 February 2010.[3] You can read the discussion here. The FACBot has been doing the job since 2014. Hawkeye7 (discuss) 06:37, 1 November 2025 (UTC)
- What happened with Special:Diff/1322382676? Early life and education of Donald Trump and El Alma al Aire got moved below the line, but Canu Cadwallon stayed on top. RoySmith (talk) 16:13, 16 November 2025 (UTC)
- ...the older nominations were moved to the older nominations section? ~~ AirshipJungleman29 (talk) 16:17, 16 November 2025 (UTC)
- Well, yeah, but my point was why didn't El Alma, which was between Trump and Cadwallon, also get moved? RoySmith (talk) 16:19, 16 November 2025 (UTC)
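As a purely illustrative aside on what the daily marker edit involves: the idea described above amounts to moving any nomination that started more than about three weeks ago below the "Older nominations" header. The sketch below is not FACBot's actual logic (the bot is written in C#), and the way start dates are supplied here is an assumption made for the example.

```python
# Illustrative sketch only, not FACBot's actual C# logic. Assumes each nomination's
# start date is already known (the real bot presumably reads it from the nomination
# subpage); here it is passed in as a dict keyed by the transclusion line.
from datetime import datetime, timedelta, timezone

OLDER_HEADER = "==Older nominations=="
THRESHOLD = timedelta(weeks=3)

def reorder_nominations(lines, start_dates, now=None):
    """Return the nomination lines with stale entries moved below the header."""
    now = now or datetime.now(timezone.utc)
    recent, older = [], []
    for line in lines:
        started = start_dates.get(line)
        if started is not None and now - started > THRESHOLD:
            older.append(line)
        else:
            recent.append(line)
    return recent + [OLDER_HEADER] + older

# Usage (hypothetical):
# lines = ["{{Wikipedia:Featured article candidates/Example/archive1}}", ...]
# start_dates = {lines[0]: datetime(2025, 10, 1, tzinfo=timezone.utc)}
# print("\n".join(reorder_nominations(lines, start_dates)))
```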
Do reviewers tend to leave the candidate?
The nomination of Wikipedia:Featured article candidates/Cube/archive1 has been in the process for five weeks or more, most of that time without reviewers. Are there any reasons, like exhaustion from reviewing the article, that cause reviewers to focus on other articles? Should I find other reviewers? As far as I can tell, the review will be closed due to inactivity, probably after more than three months. Dedhert.Jr (talk) 05:15, 31 October 2025 (UTC)
- If a reviewer has gone silent for more than a few days, I think a polite ping from the nominator to remind them is entirely appropriate. RoySmith (talk) 12:37, 31 October 2025 (UTC)
- In my experience, if a reviewer stops commenting that means either (a) they got too busy and forgot; or (b) they do not Support promotion, and are politely conveying that fact by remaining silent; or (c) they are observing other reviews (of the same nomination) and waiting for them to complete. After a reviewer is reminded, they often resume the review or add a closing comment stating Support/Oppose/Neither. But they are not obligated to respond, and sometimes remain silent. Noleander (talk) 14:54, 31 October 2025 (UTC)
- I get that people get busy, go into lurking mode, etc, but to not respond after being pinged is just rude. If you are not going to continue the review, say so. Don't just be a roadblock. RoySmith (talk) 15:06, 31 October 2025 (UTC)
- I agree. Not only is it rude to the nominator, the reviewer is also forcing the coordinators to waste time asking "Anything else you want to add to your review?". I confess I was one of the reviewers that failed to wrap up my review of the article under discussion here. I thought I was being polite, but I should have at least written "No more to add" or another wrap-up statement. I have remedied that oversight. Noleander (talk) 15:32, 31 October 2025 (UTC)
Quality of Grokipedia articles is woeful
Grokipedia has been live for several days. Out of curiosity, I examined some of its articles that corresponded to my own Wikipedia FA articles. The quality is abysmal:
- No sense of proportion: it over-emphasizes some minor topics, and under-emphasizes some major topics
- Context is missing in many important places, so the reader cannot fully understand the material
- It has a very strong Musk/conservative leaning: in one of the articles, it slanted some material so far right that it became factually wrong
- One of the articles had a factual mistake: the accompanying citation was an opinion piece written by an undergraduate student and hosted on the University of Chicago website; apparently Grok AI thought it was a valid academic source.
- Some material is repeated three or four times within a single article, in various sections.
- No images in any articles, which makes for a very dry appearance
- No tables, no graphs, no diagrams
Textually, the Grokipedia articles look impressive: the phrasing is certainly encyclopedic, and there are quite a few footnotes (though many are to dubious sources).
Bottom line: Grokipedia is extremely unreliable at this point in time. The most concerning thing is that it is clear that Grok AI is deliberately over-weighting sources that are aligned with Musk's personal philosophies: that weighting is evident in the articles, and makes the entire enterprise suspect. Noleander (talk) 03:27, 1 November 2025 (UTC)
- Agreed. When I've compared it to Wikipedia articles I'm familiar with, Grokipedia has a strong slant towards fringe far right viewpoints. A lot of the references it cites would be dismissed out of hand on Wikipedia as being clearly unreliable, and in many cases those references don't even support the claim. As with many AI products, a real problem with Grokipedia is that it's very confidently written and presented, despite the underlying material being frequently questionable. Nick-D (talk) 04:01, 1 November 2025 (UTC)
- The first paragraph of Grokipedia's Jimmy Carter article claims that he won the electoral college, but lost the popular vote. Later on the same page it notes he won the popular vote by 50 to 48%. Hyperbolick (talk) 06:07, 1 November 2025 (UTC)
- I had a flick to a few of the titles on which I've written Wikipedia articles. It was a good confidence boost about my own work, at least! Interestingly, it's often saying similar things to the Wikipedia article, but changing it slightly and finding a new, lower-quality citation which often doesn't say what it claims it does. I know that AI hallucinating references is nothing new, but it's interesting that it's actively discarding good material in favour of made-up rubbish. UndercoverClassicist T·C 10:35, 1 November 2025 (UTC)
- It will also straight-up plagiarize Wikipedia. Grokipedia's "Sun in fiction" article says (at the very bottom) that "The content is adapted from Wikipedia", but that means that it was taken wholesale from our Sun in fiction article. It even includes the "aa ab ac [...]" markup from reused references because it apparently cannot recognize that it's not part of the actual citation. The "Venus in fiction" article hilariously includes the "Some early depictions of Venus in fiction were part of tours of the Solar System. Clicking on a planet leads to the article about its depiction in fiction." caption from our Venus in fiction article, but not the actual image that the caption refers to. It also apparently cannot handle the {{multiref2}} template, so one of the citations in Grokipedia's "Venus in fiction" article appears as just plain "b" in the list of references and the inline footnote is omitted entirely. The Grokipedia "Mars in fiction" article, which at least isn't copied from Wikipedia even if it seems pretty clear that it was based upon our Mars in fiction article, conspicuously mentions Musk and SpaceX quite heavily with one of the headings being "Evolving Depictions in the SpaceX Era". Embarrassing, really. TompaDompa (talk) 12:02, 1 November 2025 (UTC)
- That's not really plagiarism, then, is it? They are doing exactly what we did, and sometimes still do, with EB 1911, and what the WP terms allow, if not encourage. I had a look at their Venetian painting, which is nearly all word for word the WP article (by me), but with no links and no pics (ours has 27), just the captions. The only metadata is the result of a "fact check" 5 days ago, which lists a few minor issues found, and the adjustments made, with the sources used. Some of these might even have a point; I'll check some time. Their Flaying_of_Marsyas_(Titian) is the same - word for word, with a few points picked on (but not yet changed). Johnbod (talk) 13:19, 1 November 2025 (UTC)
- What’s the copyright status of the now-unsupported quotes in the GrokOfShit article, though? I would think they’re possibly on thin ice by not providing any sources except the ‘we stole this from Wikipedia’ boilerplate. Either way, for all that criticism of Wikipedia being such a terrible place, it’s a wonder why they decided to cut and paste so much of it. SchroCat (talk) 13:39, 1 November 2025 (UTC)
- Call it repackaging rather than plagiarizing, then. That the content was lifted, and carelessly at that, remains the case. TompaDompa (talk) 13:50, 1 November 2025 (UTC)
- Nah, I think that's still plagiarism. Wikipedia says that you're allowed to copy our text without us complaining, but that doesn't make it any less plagiarism to do so -- it's not like theft; the consent of the original writer is immaterial (hence self-plagiarism is a thing). When you write something and put your name on it, you're saying it's your work or being clear about what isn't -- that's a basic standard that anyone in education, academia, publishing etc would accept. UndercoverClassicist T·C 15:58, 1 November 2025 (UTC)
- I was just wondering what Grokipedia would look like if it introduced counterparts to FAC, WP:MOS, WP:RS, and similar policies in subsequent iterations. MSincccc (talk) 16:33, 1 November 2025 (UTC)
- Is anyone actually surprised that it’s second-rate dross? - SchroCat (talk) 13:09, 1 November 2025 (UTC)
- Another reason why we need to keep AI-generated text far away from Wikipedia; if people want that, they can go to Grokipedia. Readers come here for articles written by humans. FunkMonk (talk) 13:26, 1 November 2025 (UTC)
- The featured article Liz Truss, considered one of the "best articles Wikipedia has to offer", currently uses three Daily Mail citations in its Grok counterpart. MSincccc (talk) 14:17, 1 November 2025 (UTC)
- Mars in fiction is likewise a WP:Featured article, and the Grokipedia counterpart cites Reddit and Quora in a section titled "Critiques of Ideological Biases in Tropes". TompaDompa (talk) 15:37, 1 November 2025 (UTC)
- I'm quite enjoying Grok's rendition of Genghis Khan: we have Quora, Reddit, YouTube, Amazon, travel blogs, video game forums, tutoring websites, and leadership-focused lessons. Of course, the lesson we (and any Grokipedia user) should be drawing is that AI has no idea what referencing or sources mean, as many of these websites don't actually support the information they said they do. ~~ AirshipJungleman29 (talk) 16:03, 1 November 2025 (UTC)
- An obvious problem with the sources Grokipedia is allegedly drawing content from is that they are exclusively online references, with no use being made of even online versions of books in any of the articles I've looked at. Nick-D (talk) 00:29, 2 November 2025 (UTC)
- Seems like Grok has a serious case of FUTON bias. I've noticed the same issue with other AIs. Jo-Jo Eumerus (talk) 09:12, 2 November 2025 (UTC)
- It could be that they are trying to avoid the threat of litigation from excessive data-mining of copyrighted works. The problem with that is that many of their articles carry no sources at all, which means they are causing copyright infringements by using quoted material without proper identification of the source. - SchroCat (talk) 09:16, 2 November 2025 (UTC)
- I am not entirely sure what the purpose of this discussion is. If the aim is simply to reassure ourselves that we are the best, I cannot agree with that. After nearly a quarter of a century of work, it seems clear that Wikipedia is still not widely regarded as a fully respected and reliable source of knowledge. If we genuinely wish to improve, we must be willing to review and adjust our processes rather than criticise our competitors. If we remain convinced that AI is not competitive with us, we will lose the competition. Borsoka (talk) 04:07, 3 November 2025 (UTC)
- The purpose of my original post was simply to compare the quality of my own FA articles with the corresponding articles in Grokipedia. As of today, Grokipedia is inferior. I'm not bragging - I was just curious. That said, I agree with you that there's always room for improvement, and we in Wikipedia should be continually striving to improve our quality and reliability. Who knows, maybe AI will have a role in that improvement (for instance, AI could provide leads to find new sources, or improve grammar). I don't understand why you say Wikipedia is not respected ... Many Google searches return WP as one of the top results; and clearly AI tools are often using Wikipedia as a source. If Grokipedia continues to function as a vehicle to promote Musk's own personal viewpoints, I don't see that it will ever be considered reliable, and it may go the way of Conservapedia. Noleander (talk) 05:26, 3 November 2025 (UTC)
- Comparing an article produced by a couple-of-days-old "baby platform" with one published on a platform that has been developing for nearly twenty-five years can hardly be considered a fair process. My concern is that each of the above comments stresses that individual articles on WP are of much higher quality than those on the new platform. I am not sure I would be particularly proud of this "success", nor would I draw any firm conclusions from it. While WP remains a useful starting point for research on specific topics, it is still not regarded as a reliable source. Frankly, after reviewing some of our FAs and FACs, I cannot say I am surprised, even though several of our FAs are of high quality. If we cannot evolve, we shall lose our position as the "starting platform" to one of our AI competitors and we can "close our business". Borsoka (talk) 05:44, 3 November 2025 (UTC)
- @Borsoka: what kind of "evolution" do you have in mind here? On one level, it would be lovely if every Wikipedia article (or at least every FA?) were considered reliable enough to cite in an academic paper, but that doesn't seem fundamentally compatible with the Wiki model, where writers and reviewers are anonymous amateurs. Citizendium tried to flip the other way, and that didn't really work as far as creating something with the role Wikipedia had; the Stanford Encyclopedia of Philosophy and similar can push the quality up by using expert contributors, but that model can't be scaled up to a comprehensive encyclopaedia of everything (and, again, is simply a different thing to what Wikipedia is). Even then, if we're saying "we need to improve the quality of our articles or we'll be replaced by Grokipedia et al", that seems (at best) premature with the current generation of AI (and arguably with any LLM-based system at all), since the Grokipedia experiment is showing that it's not generally able to produce content at the same quality as the Wikipedia (FA) process, even when given Wikipedia FAs as a starting point. What's the comparative advantage you think AI has that we need to evolve against? UndercoverClassicist T·C 07:28, 3 November 2025 (UTC)
- One doesn't have to compare individual articles to see the weakness in products (although that is self-evident on every article I've looked at, both FA and below), but a scan of independent media shows enough examples of the low esteem in which Grokipedia is held - and it's widely voiced that we (the project as a whole) come out favourably. For all Musk's whining about WP, he doesn't seem to have a problem ripping us off, but he does so in a way that degrades the content at the same time. Looking at any article that actually shows sources apart from "we copied this from WP", going for low-level blogs, fansites and Reddit threads isn't going to instil any confidence in most people. All we can do as the FA part of the project is to ensure that our standards remain high, particularly in the area of sourcing. - SchroCat (talk) 07:55, 3 November 2025 (UTC)
- Once again, comparing a "baby platform" with our nearly 25-year-old project is neither fair nor a meaningful argument. Sooner or later, AI will probably outperform us in the mass production of articles. I have read several Grokipedia articles, and I must admit that many of them are comparable in quality to the average WP articles. Of course, Grokipedia's articles sometimes contain questionable claims, but I have also come across FAs and GAs that included several problematic statements. We should carefully consider whether WP's primary mission is to serve as a social workshop for people with free time or as a platform for building an encyclopedia. If we continue to operate mainly as a social workshop, WP will likely be read almost exclusively by members of our own community, who will exchange badges, smileys, and kind messages with each other. While WP as a whole may never fully reach the general reliability level of encyclopedias published by academic institutions — or, in the near future, by AI platforms — we could still become the best artisan encyclopedists on the market, if our processes effectively support quality work. I think I have said everything I wanted to say on this topic, so I will remain silent from now on. I simply prefer editing articles to having discussions. :) Borsoka (talk) 09:58, 3 November 2025 (UTC)
- What machine-generated articles still cannot achieve:
- Recognise that some stylistic revisions (even when they are improvements) can be omitted, as differences in word choice between users are perfectly acceptable.
- Distinguish between reliable and unreliable sources unless explicitly indicated on a page such as WP:RSPSOURCES.
- Humanise text so that the flow feels natural (though they often strive hard to achieve accuracy).
- Review articles according to set guidelines rather than offering generalised statements.
- These are just a few of the limitations (we know the pros) of machine-generated articles – limitations that will take some time to overcome, rather than anything changing in the immediate present. MSincccc (talk) 10:14, 3 November 2025 (UTC)
- It could certainly be argued that some specific kinds of articles have been rendered obsolete by LLMs. There is probably a subset of our list articles that we are unable to maintain to an acceptable standard that an LLM would do no worse at, for instance. This might be the case for e.g. several articles in Category:Lists of fictional characters or other lists where the primary purpose is enumerating instances of X (say, List of one-eyed horse thieves from Montana, to use the example from WP:TOOSPECIFIC). TompaDompa (talk) 19:42, 3 November 2025 (UTC)
- I suspect you could add whole swathes of articles to that too - as long as you’re not looking for anything too in-depth or meaningful. Episodes of television programmes or other bits of popular culture are probably areas where an LLM could source a lot of crap from other wikis, fan pages etc and pull together something of sub-GA standard. - SchroCat (talk) 20:06, 3 November 2025 (UTC)
- You probably have to look at the reasons why a Wikipedia article isn't great -- there are some which have lots of good sources but nobody's really got around to tapping them (I did some work on Girl with a Mandolin recently, which previously fell into that category). Those ones probably could be done better than they currently are by an LLM (or indeed a competent novice editor) -- the reason they haven't been improved yet is because of a lack of editor time and interest, and LLMs have us beat on those fronts. With that said, we've seen that at least the current suite of LLMs will struggle to use those good sources effectively, or to pick them out from the background noise of user-generated crankery on the internet. However, there's at least a big chunk of our "bad" articles that are in that state because there are reams of bad sources about them (Marian reforms until its recent overhaul), or because there simply aren't good sources to write from (any number of stubs about XYZ village in India/Peru/Russia etc). In those cases, it's difficult to see what an LLM can do about those problems -- we know what it generally will do, which is throw together rubbish sources or make them up itself, but I don't think anyone would consider either of those a good outcome. UndercoverClassicist T·C 21:32, 3 November 2025 (UTC)
- My point was mainly that perhaps Wikipedia should do away with, say, List of European advertising characters and the like. More generally, I am open to the idea that Wikipedia should remove articles that are worse than what an LLM would produce and that show no signs of surpassing that quality threshold in the near future. But then I have long been of the opinion that Wikipedia (at least the English-language edition) has reached a sufficiently mature state that the focus should be on content curation rather than content creation (article quality, not quantity). TompaDompa (talk) 21:54, 3 November 2025 (UTC)
- Seems like Grok has a serious case of FUTON bias. I've noticed the same issue with other AIs. Jo-Jo Eumerus (talk) 09:12, 2 November 2025 (UTC)
- An obvious problem with the sources Grokipedia is allegedly drawing content from is that they are exclusively online references, with no use being made of even online versions of books in any of the articles I've looked at. Nick-D (talk) 00:29, 2 November 2025 (UTC)
- I'm quite enjoying Grok's rendition of Genghis Khan: we have Quora, Reddit, YouTube, Amazon, travel blogs, video game forums, tutoring websites, and leadership-focused lessons. Of course, the lesson we (and any Grokipedia user) should be drawing is that AI has no idea what referencing or sources mean, as many of these websites don't actually support the information they said they do. ~~ AirshipJungleman29 (talk) 16:03, 1 November 2025 (UTC)
- Mars in fiction is likewise a WP:Featured article, and the Grokipedia counterpart cites Reddit and Quora in a section titled "Critiques of Ideological Biases in Tropes". TompaDompa (talk) 15:37, 1 November 2025 (UTC)
- I have a fear that all our work and time will be wasted if any future encyclopedia, or Grokipedia/AI, improves significantly and becomes better than Wikipedia, thus replacing Wikipedia with AI in Google's knowledge results. Anyway, we have very important articles here at Wikipedia that are horrible to read, like Spaghetti, and possibly Grokipedia could do better at them. ~2025-32224-03 (talk) 23:27, 8 November 2025 (UTC)
I had a look at tuberous sclerosis (vs our tuberous sclerosis). I thought that might be a fair example, where political bias is absent and there are no great controversies. I haven't studied the differences in detail other than to note the footnote that it was based on the Wikipedia article. I clicked on the (see edits) button. Am I right in thinking these are AI edits to the base Wikipedia article? If so, they seem promising as an approach. Wouldn't it be useful to have AI look at every sentence in our articles and ask if that is correct, fair, and up-to-date? And if not, provide at least one citation for potential corrections? I wonder whether, even if it isn't very accurate, a hit rate of at least 50% in spotting issues could be a useful basis from which to go off and revise some of our sentences. And perhaps, for areas where it gets things quite wrong, the reason may sometimes be that there are indeed bad sources out there, or myths and misconceptions that we'd expect a good Wikipedia article to be boldly right about. Could that sentence then be clearer and firmer about what the accepted Truth is? Or does it, in fact, need to be balanced to offer more than one possible explanation, with pros and cons, supporters and opponents?
What I'm wondering is whether, even if the resulting Grokipedia page is unfiltered dross, we can treat the AI edits and differences as a tool that a human could use for some benefit. -- Colin°Talk 09:03, 4 November 2025 (UTC)
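To make the sentence-by-sentence idea above concrete, here is a minimal sketch of what such a check could look like. Everything in it is illustrative: ask_model is a hypothetical placeholder for whatever LLM backend would be used, and the naive sentence splitting and output format are assumptions, not a description of any existing tool.

```python
# Illustrative sketch only: a per-sentence "is this correct, fair and up to date?" pass.
# ask_model() is a hypothetical placeholder for an LLM call; no existing tool works this way.
import re

def split_sentences(text: str) -> list[str]:
    # Very naive splitter; a real tool would need something more robust.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def ask_model(sentence: str, article_title: str) -> dict:
    # Placeholder for an LLM call. A real implementation would send the sentence
    # (ideally with its cited source) to a model and parse a structured reply.
    # Here it simply reports "no issue" so the sketch runs end to end.
    return {"ok": True, "issue": "", "suggested_citation": ""}

def review_article(article_title: str, article_text: str) -> list[dict]:
    """Return a worklist of sentences the model thinks need human attention."""
    worklist = []
    for sentence in split_sentences(article_text):
        verdict = ask_model(sentence, article_title)
        if not verdict.get("ok", True):
            worklist.append({
                "sentence": sentence,
                "issue": verdict.get("issue", ""),
                "suggested_citation": verdict.get("suggested_citation", ""),
            })
    # Even at a 50% hit rate, this list is only a starting point:
    # every item still needs a human to check it against reliable sources.
    return worklist
```

The value, if any, would be in the worklist such a pass produces for human editors, not in letting a model change article text itself.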
- I wouldn't trust AI to judge whether an article is "correct" or "fair", particularly when its searches seem to be fixed on sources that are low-level crap. - SchroCat (talk) 09:28, 4 November 2025 (UTC)
- I think it could certainly have value as a flagging tool -- a bit like the various Watchlist bots that flag whether an edit is likely to be vandalism -- perhaps to post an article to a central list for review with a note like "this article doesn't give a date of birth, but XYZ sources say she was born in 1980", or "this article says he never married, but Source A says his wife was called Jane Jones". I suspect a lot of that would end up being flagged as false positives (compare the bot that flags usernames as potentially inappropriate), but it would probably also catch things worth fixing. One could certainly imagine running a bot over a page to catch typos, formatting errors, misplaced angle brackets, and so on, much like we have Citation Bot, FindArgDups and so on (again, none of which are perfect, but all of which are generally considered net positives). UndercoverClassicist T·C 09:48, 4 November 2025 (UTC)
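The reporting side of that flagging idea is the easy part; below is a hedged sketch of what posting such flags to a central review list might look like. It assumes Pywikibot, a placeholder target page (Wikipedia:Sandbox stands in for a real noticeboard) and flag data coming from whatever checking tool produced it – and, as noted above, plenty of false positives would be expected, so a human works through the list before anything is changed.

```python
# Illustrative sketch only: append machine-generated flags to a central page for human review.
# Assumes Pywikibot; the target page name and the flag data format are placeholders.
import pywikibot

def post_flags(flags: list[dict], target_title: str = "Wikipedia:Sandbox") -> None:
    """Append one bullet per flag, where each flag looks like
    {"article": "Example", "note": "no date of birth given, but source X says 1980", "confidence": 0.8}.
    """
    site = pywikibot.Site("en", "wikipedia")
    page = pywikibot.Page(site, target_title)
    bullets = []
    for flag in flags:
        if flag.get("confidence", 0.0) < 0.5:
            continue  # drop the least confident flags; plenty of false positives will remain anyway
        bullets.append(f"* [[{flag['article']}]] – {flag['note']} (automated flag, needs human check)")
    if bullets:
        page.text = page.text + "\n" + "\n".join(bullets)
        page.save(summary="Adding automated review flags (sketch)")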
- I already revised four of our dinosaur FAs based on it. Much less than 50% of the Grokipedia changes were useful for these articles, but it found three clear factual mistakes, three instances where citations have been confused, and one instance where a recent paper has not yet been incorporated. The problem is that it makes egregious and unexpected mistakes (saying something isn't in the source when it actually is, etc.), so every alleged issue has to be carefully analysed and the thing can never be trusted. I just checked the dinosaur article, where it complains that some things are outdated, which is true, but the new literature it suggests is not the best and most recent, and the suggested fixes are poor as well. Therefore, it requires time to analyse the potential issues. For articles I recently worked on myself and thought were free of issues, it's useful, but for other, older articles, I wonder if it is still more effective to carefully read/copy edit the article the traditional way, so that we do not have to dive into all those false positives. So it depends, I would say. --Jens Lallensack (talk) 09:53, 4 November 2025 (UTC)
- I'm curious, how are you telling what to look at for a change? And how did you determine it was an issue of factual errors or citation jumbling? Der Wohltemperierte Fuchs talk 22:00, 6 November 2025 (UTC)
- David Fuchs, from what I saw, my interpretation is that the LLM checks for three things: 1) Does the statement contradict common knowledge? 2) Is the source likely to support the statement? 3) Are there any omissions, or is the content outdated considering recent (open access) sources? If the LLM indeed works like this, it would be very similar to the human approach. In fact, I believe that a careful human reviewer with a bit of background knowledge could have caught all of the issues found, even without constantly comparing with sources (although the reviewers didn't catch the errors in these cases). However, most "issues" that the LLM found were non-issues; once, it stated that a statement was not supported by a source, explaining that the source was an anatomical description of the skull of the animal, while the statement was about a detail from the limbs. But in this case, the source contained that information nonetheless. I went through all of the potential issues that the LLM listed, analysing which were actual issues and which were not. When the complaint was "the source does not support that information", it was usually obvious when citations had been confused (e.g., citing "Benson 2009" instead of "Benson 2019"), and the correct citation that the author intended to cite was usually already in the article. As for the factual mistakes it found that I could verify, those were clear errors that were also not supported by the cited sources. One of those errors was probably because the author only read the first sentence rather than the entire paragraph and then made a false assumption; in another case, it was erroneous interpolation ("this remake movie featured that dinosaur, so it must have been featured in the original movie as well"); but as for the third case of factual mistake, I have no explanation. --Jens Lallensack (talk) 12:56, 7 November 2025 (UTC)
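If that reading of the tool's behaviour is right, its triage step could be summarised roughly as below. This is purely speculative – it is not how Grokipedia's checker is documented to work – and the individual checks are crude stand-ins; the one thing the sketch is meant to emphasise is that every flag raised still needs human verification against the actual source.

```python
# Speculative sketch of the three checks described above; not Grokipedia's actual pipeline.
from dataclasses import dataclass, field

CHECKS = (
    "contradicts_common_knowledge",    # 1) does the statement clash with background knowledge?
    "source_unlikely_to_support",      # 2) is the cited source unlikely to support it?
    "possibly_outdated_or_incomplete", # 3) do recent (open-access) sources suggest omissions?
)

@dataclass
class Flag:
    statement: str
    triggered_checks: list[str] = field(default_factory=list)
    needs_human_verification: bool = True  # always true: false positives are common

def triage(statement: str, source_summary: str, uncited_recent_titles: list[str]) -> Flag:
    """Toy triage: run crude stand-ins for the three checks and record any that fire."""
    flag = Flag(statement=statement)
    lowered = f" {statement.lower()} "
    # Stand-in heuristics only; a real system would use a model for each check.
    if " always " in lowered or " never " in lowered:
        flag.triggered_checks.append(CHECKS[0])  # sweeping claims are worth a second look
    if not any(word in source_summary.lower() for word in statement.lower().split()[:5]):
        flag.triggered_checks.append(CHECKS[1])  # statement shares no early wording with the source
    if uncited_recent_titles:
        flag.triggered_checks.append(CHECKS[2])  # newer literature exists that the article does not cite
    return flag
```

Whether the real tool does anything like this is anyone's guess; the flags are only as useful as the human check that follows them.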
- I had a wonderful experience asking Copilot to do a pre-check of Saxe-Goldstein hypothesis before FAC, with a focus on any mistakes I'd made with AmerE: it picked out "behavior" (as "behaviour") and said, almost verbatim, "You should change the word 'behaviour' to 'behavior', but this doesn't apply here as you've already done it." So it even caught midstream that it had come up with a false positive, and yet reported it anyway -- a nice illustration of the limitations of how these things work. I'll probably do that again next time ("help me find the typo/duplicate word/silly mistake that the first reviewer will spot in five minutes"), but won't have very high expectations. UndercoverClassicist T·C 14:41, 7 November 2025 (UTC)
- Once I tried article checking with chatgpt and it didn't produce anything meaningful either. But the Grokipedia thing did; it must do something differently. --Jens Lallensack (talk) 15:25, 7 November 2025 (UTC)
- Colin, I also looked at "See edits" in DLB and the other TS (Tuberous sclerosis v Tourette syndrome). Both FAs came through fairly intact (you can look at their Reference section to easily see the changes since they show as bare URLs), but I see some big problems in contrast to our MEDRS. 1. When SFNs or manual short notes are used, they don't bring over the full citation, so their readers will have a hard time determining what the source is. 2. You can easily see what has changed by viewing the bare URLs in their version, often primary sources, and by reading through the changes on "See edits" -- they are adding in things like recent primary sources and press releases on medications, and news additions not rising to MEDMOS level. I haven't had time to type it up, but they did find some significant things, like one place where I used the wrong citation and citations had to be swapped, or places where I haven't caught the most recent secondary reviews, but generally, the articles are being downgraded by what, for us, are simply breaches of policy -- NOTNEWS, MEDRS, etc. (There's apparently significant new information on the heritability of DLB that I need to look into and update.) It's particularly concerning with medications (I've been meaning to see what they've done with Alzheimer's and the recent controversial meds). I was planning to type up a better summary (and fix what needs fixing), but life has been cruel ... just adding this brief summary as you looked at your TS, and this gives you and WT:MED more to look at. In summary, my analysis is similar to Jens Lallensack's (they did find some problems), but the differences wrt MEDRS are concerning. Overall, it leaves me depressed and not wanting to keep up with these FAs; if medication misinformation can be spread via press releases, I'm left feeling ... why bother? SandyGeorgia (Talk) 11:59, 4 November 2025 (UTC)
- Contrasted to our FAs, you can see they pretty much rewrote Parkinson's disease -- an article I've tried to keep up with but I knew was in bad shape. It would be interesting to look at Former featured article Autism, as we have now a complete mess there. (Some clever person should design a template for easily referring to a Grokipedia article). SandyGeorgia (Talk) 12:15, 4 November 2025 (UTC)
- I think that, over the long term, AIs doing things like comparing text to its source have potential, but I wouldn't say that the current generic LLMs can do this job. Or rather, I dunno whether they can do this job, not having done any testing. Jo-Jo Eumerus (talk) 08:28, 6 November 2025 (UTC)
- I've done ad-hoc testing. LLMs can pick things up usefully. I've seen them find mismatches between a source figure and an article figure, for example. The issue is that the underlying mechanism is always going to be prone to producing errors in the same way that it makes correct observations. (See all the errors in the AI edits we now get.) I would also anecdotally posit that the higher quality the Wikipedia article is, the less use LLM suggestions are. Comparing Groktext to our FAs is probably going to yield much less insight than underdeveloped articles (in line with what UndercoverClassicist said), although this will still need very careful human eyes. CMD (talk) 00:42, 7 November 2025 (UTC)
RfC regarding "Make technical articles understandable" guideline revamp
[edit]See here; this guideline is quite relevant to the FAC process so I thought everybody should be aware. Jens Lallensack (talk) 12:37, 6 November 2025 (UTC)
FAC reviewing statistics and nominator reviewing table for October 2025
[edit]Here are the FAC reviewing statistics for October 2025. The tables below include all reviews for FACs that were either archived or promoted last month, so the reviews included are spread over the last two or three months. A review posted last month is not included if the FAC was still open at the end of the month. The new facstats tool has been updated with this data, but the old facstats tool has not. Mike Christie (talk - contribs - library) 12:44, 12 November 2025 (UTC)
Reviewers for October 2025
[Table not reproduced; the data is available via the facstats tool.]
Supports and opposes for October 2025
[Table not reproduced; the data is available via the facstats tool.]
The following table shows the 12-month review-to-nominations ratio for everyone who nominated an article that was promoted or archived in the last three months and who has nominated more than one article in the last 12 months. The average promoted FAC receives between 7 and 8 reviews. Mike Christie (talk - contribs - library) 12:44, 12 November 2025 (UTC)
Nominators for August 2025 to October 2025 with more than one nomination in the last 12 months
[Table not reproduced; the data is available via the facstats tool.]
-- Mike Christie (talk - contribs - library) 12:44, 12 November 2025 (UTC)
RfC regarding a new nominator feedback page
[edit]See here; the proposed feedback page could also be useful for the FAC process, so I thought it would be good for everyone to be aware. MSincccc (talk) 07:28, 14 November 2025 (UTC)
RfC of potential interest
[edit]There is an RfC of potential interest to members of the FAC project regarding the addition of an IB on the George Formby FA. - SchroCat (talk) 21:37, 14 November 2025 (UTC)
Discussion at Wikipedia talk:Good article nominations § New script to help do source spot checks
[edit] You are invited to join the discussion at Wikipedia talk:Good article nominations § New script to help do source spot checks. ~~ AirshipJungleman29 (talk) 00:22, 16 November 2025 (UTC)
Discussion re. Main Page FAs
[edit] is taking place here, and may be of interest to the members of this project. —Fortuna, imperatrix 17:29, 16 November 2025 (UTC)
Where did the TOC go?
[edit]I only see a huge, uncollapsible page now, not even a TOC list. A glitch? Something on my end? FunkMonk (talk) 07:04, 18 November 2025 (UTC)
- If you just switched to the new Vector layout, it is the hamburger menu next to the article title (in this case, just to the left of "Wikipedia:Featured article candidates"). You can click "move to sidebar" to have it permanently visible. --Jens Lallensack (talk) 07:10, 18 November 2025 (UTC)
- Ah, thanks, it seems the script that collapsed the individual nomination pages also stopped working, is there a new one for the new layout? Seems the new layout also obliterated the support/oppose counter... FunkMonk (talk) 07:19, 18 November 2025 (UTC)
- You have to reinstall Wikipedia:Nominations viewer (because your script page is specific to vector-2022), I think. It is working fine here. --Jens Lallensack (talk) 07:34, 18 November 2025 (UTC)
- Thanks, will give it a try! FunkMonk (talk) 16:08, 18 November 2025 (UTC)
Request regarding Meg White as a FAC
[edit]Hello! I've been working on a bunch of The White Stripes articles and, earlier this year, got the Meg White article promoted to a good article. In light of the band's recent induction to the Rock and Roll Hall of Fame, I was able to add more to the article and use new sources to max out the information available. I think I've covered every important aspect (with early help from GA reviewers, too) and want to work on possibly promoting this to a featured article. Wikipedia:Mentoring for FAC said to post a request here for help. Thank you in advance! Watagwaan (talk) 13:30, 19 November 2025 (UTC)
- Do you want an editor to review the article and assess if it meets the FA criteria? Then you should post a request at WP:Peer review, and add the article to the list in Template:FAC peer review sidebar. Or, if you are ready to nominate it for FA status now, and simply want guidance on the nomination procedure, the instructions are at WP:Featured_article_candidates#Nominating. Noleander (talk) 13:41, 19 November 2025 (UTC)
- Note the part in the FAC instructions recommending that first-time nominators have a mentor. Gog the Mild (talk) 16:38, 19 November 2025 (UTC)
- If I read Watagwaan's post correctly, I think it is a request for a mentor. I'll ping a few people on the mentoring list with experience in pop music and culture articles: @David Fuchs, Gen. Quon, and Masem: one of these might be a good shout to approach, or you could look through the archives and find the nominators of similar articles recently promoted. UndercoverClassicist T·C 19:10, 19 November 2025 (UTC)
- Yes, sorry if it was worded wrong! I initially reached out to Gen. Quon who helped me with GA reviews, but I also recognize that Gen. Quon is quite busy so I felt bad about pestering them in a sense. I'll contact the other two, thank you @UndercoverClassicist! Watagwaan (talk) 19:53, 19 November 2025 (UTC)
- I don't think they've put themselves on the list, but another of our co-ordinators (FrB.TG) is a bit of a celebrity themselves when it comes to pop culture articles -- they might be worth reaching out to either as a potential mentor or for a look over when the article seems to be close to nomination? UndercoverClassicist T·C 07:38, 21 November 2025 (UTC)
- LOL I'd be happy to help out. FrB.TG (talk) 10:23, 25 November 2025 (UTC)
- @Watagwaan - If the pop-culture experts listed above are not available for mentoring Meg White, I'd be happy to help you out. I'm not a pop culture expert, but I do like the beat of "Seven Nation Army". Just ping me on my Talk page and I can give you input on the article and answer any questions you have. Noleander (talk) 16:15, 21 November 2025 (UTC)
- Thank you so much! I'll respond on your talk page. Watagwaan (talk) 17:35, 21 November 2025 (UTC)
FAC peer review sidebar is getting backlogged
[edit]I've been doing some cleanup of {{FAC peer review sidebar}}. After closing out a few very old reviews that have gone stale, there's still 25-ish requests pending, some of which go back 3-4 months. There's a few from first-timers to FAC, some of whom have explicitly asked for mentorship (alas, in subject areas where I'm not competent to review). So if folks have some time, it would be a good thing to take a look at the list and see if you can help beat back the backlog. RoySmith (talk) 16:44, 30 November 2025 (UTC)
- I can do one or two peer reviews. If the first timers are fresh in your mind could you identify them here? ... if not I can figure out which ones they are. Noleander (talk) 17:23, 30 November 2025 (UTC)
- Wikipedia:Peer review/Murder of Sara Sharif/archive1 was one, and is the oldest on the list, so a good place to start. RoySmith (talk) 17:26, 30 November 2025 (UTC)
- Got it. Noleander (talk) 17:43, 30 November 2025 (UTC)
I went through the PRs and added a couple more to the list. I encourage all editors to take a look and comment on pre-FAC PRs (and other PRs) that interest them. Z1720 (talk) 03:38, 1 December 2025 (UTC)
AI finds errors in ~~90%~~ some of October's TFAs
[edit]Colleagues may find interesting this article about errors in October’s TFAs. - SchroCat (talk) 03:40, 1 December 2025 (UTC)
- Would this error-detection capability of ChatGPT be a helpful tool for FA reviewers? They could apply the tool to FA nominations they are reviewing. Or, nominators could use it as a tool prior to submitting a nomination. If a ChatGPT subscription is required, perhaps Wikipedia could provide a single shared account that would enable editors to run it without having to create and pay for their own subscription? Noleander (talk) 05:32, 1 December 2025 (UTC)
- Looking at the errors found, it would probably be of some use as one of the possible tools reviewers could consider. Certainly not infallible, and some of the errors mentioned were not errors at the time the FA in question was written (there are a few in there where later sources have been identified), but there are certainly some simple points that the programme picked up which have been missed by the human eye. The usual caveats with anything AI-related obviously apply. - SchroCat (talk) 06:15, 1 December 2025 (UTC)
- Of course it's useful. And equally, of course it's not infalable. We have lots of valuable tools which point out possible problems for human attention. Just like as I'm typing this, I see that the word "infalable" I typed earlier has a red underline alerting me to a likely spelling error. RoySmith (talk) 12:27, 1 December 2025 (UTC)
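For anyone wanting to try the reviewer-side check discussed above, a pre-nomination pass could be scripted along the following lines. This is a hedged sketch only: it assumes the OpenAI Python SDK with an API key (rather than a shared ChatGPT subscription), and the model name and prompt wording are placeholders rather than the Signpost experiment's exact setup, which is described in the piece itself.

```python
# Illustrative sketch only: ask a model for the most serious factual or sourcing
# problem in an article before nominating or reviewing it at FAC.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the model name and prompt wording are placeholders.
from openai import OpenAI

def precheck(article_title: str, wikitext: str, model: str = "gpt-4o") -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        f"Here is the wikitext of the Wikipedia article '{article_title}'. "
        "Identify the single most serious factual or sourcing error you can find, "
        "quoting the sentence concerned and explaining why you think it is wrong.\n\n"
        + wikitext
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Treat the answer as a lead to investigate, not a verdict: false positives are expected.
    return response.choices[0].message.content
```

As with the spellchecker analogy above, the output is only a pointer for a human to verify against the sources.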
- Interesting. It is, I think, not surprising that there remain errors in even the most thoroughly-reviewed FAs – if the FA nominator, who is likely to be one of Wikipedia's most expert editors on the topic, hasn't noticed an error, then unless another expert on the topic reviews the article at FAC or someone is really doing their due diligence on the spotchecking it's entirely likely that the error is subtle enough that it's going to slip through the net.
- Of the 38 errors marked by HaeB as valid, by my count 10 are explicitly listed as regarding the infobox (and at least two more – the 29-minute runtime of Deer Lady and the synonym year for the African striped weasel – are also infobox issues); eleven further errors are explicitly attributed to the article lead. So at least 23 of the 38 errors identified (>60%) are issues with the infobox or lead. Does this suggest that these sections ought to be subject to more careful reviewing, or is it simply an artefact of how ChatGPT responded to the given prompt? Caeciliusinhorto-public (talk) 10:48, 1 December 2025 (UTC)
- From a nominator/reviewer point of view, I do find that too little attention is paid to infoboxes in particular, although I would think the lead was better examined. The point about the prompt response is interesting, though; HaeB, can I ask if you've looked at what happens if ChatGPT is asked for more than one error? ~~ AirshipJungleman29 (talk) 12:12, 1 December 2025 (UTC)
- "I do find that too little attention is paid to infoboxes in particular, although I would think the lead was better examined" – I'm certainly guilty of not checking infoboxes thoroughly. Like you, I would expect the lead to be one of the most thoroughly reviewed parts of the article – though perhaps the explanation is simply that in summarising a complex topic into three or four paragraphs, errors and imprecision inevitably creep in. Caeciliusinhorto-public (talk) 16:15, 1 December 2025 (UTC)
- It's not at all surprising, even at FA level. Infoboxes are often done by different people from the article text, sometimes against the main text-writer's wishes, and by people interested in adding infoboxes rather than the actual subject. Leads that are too short and do not properly reflect the text below are a general problem on WP, and though FAC should improve these situations, I find many leads are still neglected by reviewers. Of course, in both cases you need to read the article first, then return to the top to see how well the first screen reflects the rest. Johnbod (talk) 16:31, 1 December 2025 (UTC)
- "It is, I think, not surprising that there remain errors in even the most thoroughly-reviewed FAs". Of course. My own Julio and Marisol, which has been on TFA for the past 12 hours, has already had a half-dozen corrections from sharp-eyed readers. I expect it'll pick up a bunch more over the rest of the day. This despite support from seven individual FAC reviewers. RoySmith (talk) 12:39, 1 December 2025 (UTC)
- In mine (Georg Karo) it took two shots, of which one was a hit and one was a miss. Honestly, it's not a huge surprise -- the article is thousands of words long, juggling dozens of sources (many of which are in languages where my competence is limited) and had undergone a lot of changes at GA nom and FAC. For all the false positives, it does seem like a fairly basic AI check found a lot of errors in very heavily scrutinised articles, and that suggests to me that it's a useful tool to use in addition to the mechanisms we already have. I'll certainly be running my next "finished" article past it with a similar prompt to that used in this study. UndercoverClassicist T·C 14:55, 1 December 2025 (UTC)
- I ran Claude on Julio and Marisol. While I don't think anything it found was an outright error, it did highlight some areas where the sourcing is iffy or conflicting and thus worthy of additional research. I was impressed when I got to "Actually, upon reviewing the search results, I need to revise my assessment". RoySmith (talk) 18:21, 1 December 2025 (UTC)
- For the record, my own little contribution to those "half-dozen corrections from sharp-eyed readers" was also thanks to AI (5.1 Thinking). Regards, HaeB (talk) 09:30, 2 December 2025 (UTC)
- I looked at one article - Siege of Tunis (Mercenary War) - and then lost interest. Rather than address any of the HQ RSs used to support the text, or consider what the consensus of scholarly opinion might be, ChatGPT seems to have selected one source to rely on. A primary source which I doubt is a RS and certainly isn't HQ. (Polybius - for a summary of his reliability see Punic Wars#Primary sources.) Where something in the article cannot be matched to the 2,200-year-old primary source, it is declared an "error", regardless of what HQ RS support it may have. In this case at least it seems a fine case study of how over-reliance on ChatGPT can lead you astray. Gog the Mild (talk) 12:30, 1 December 2025 (UTC)
- To clarify just in case, you are talking about one of the 23% of claimed errors that were indeed rated invalid, see Wikipedia:Wikipedia_Signpost/2025-12-01/Opinion#Oct 28. I guess you are saying that this is not a tool for people who let themselves be impressed too easily without double-checking (or for people who get stressed out by encountering very wrong claims, although honestly these would usually also have trouble as Wikipedia editors, especially when checking RC or their watchlist regularly). Regards, HaeB (talk) 09:26, 2 December 2025 (UTC)
- I've left comments concerning the "error" in Fourpence (British coin) at the Signpost. While AI is a useful tool, and I expect will be more so given time, this one seems sensationalist.--Wehwalt (talk) 13:06, 1 December 2025 (UTC)
- I ran John Fressh through it and all it moaned about was his name not having an 'e' at the end. So not particularly useful, in that particular case anyway. —Fortuna, imperatrix 15:08, 1 December 2025 (UTC)
- Which version of ChatGPT were you using? (As cautioned under Wikipedia:Wikipedia_Signpost/2025-12-01/Opinion#A few observations about this experiment and further explained on the talk page, the experiment was done with a paid ChatGPT Plus account using the "Extended Thinking" setting, where it can often spend several minutes going through many sources.)
- I just ran it myself for this article [4], and it found what I would consider two quite embarrassing typos: misspelling William Walworth as "Wentworth", and a jubilee extended backwards in time (Edward III's jubilee year, "1376–1367") - both fixed now. Given that the article was just promoted to FA status without any of the reviewers noticing these, I think this example actually confirms that this can be a useful tool during FA review. Of course one can't expect perfect answers every time (as mentioned under Wikipedia:Wikipedia_Signpost/2025-12-01/Opinion#Results, my own success rate in the experiment with this prompt was 68%). Regards, HaeB (talk) 17:51, 1 December 2025 (UTC)
- This result should be expected. FA reviews don't check every source. Even when spotchecking is done, it is only on a selection of claims/sources. A computer churning through things is always going to be able to brute-force better than human reviewers. This is a tool that can improve articles if used well, but even then we don't expect FAs to be perfect. Consider the flipside to the framing: even taking all the minor nitpicks as real errors, ChatGPT found that 10% of the October TFAs were error-free! That's also probably not totally true, but taking it as so, before this experiment, what would people have guessed as the percentage of FAs without a single error? CMD (talk) 16:30, 1 December 2025 (UTC)
- I call BS on the “clear inaccuracy” of my article: 2021 World Figure Skating Championships. The point was the juxtaposition of the pairs team winning the championship while performing to Queen’s “We Are the Champions”. The article was “clearly inaccurate” because there was another song during their four minute routine? GTFO. Bgsu98 (Talk) 04:04, 2 December 2025 (UTC)
- A more robust experiment would be to actually feed an entire section’s source through these tools. In practice, they’re good at catching typos, redundancies, and the more obvious factual slips. But quite often the first few suggestions are mostly stylistic, and the nominator may well decide their original version is fine. At the end of the day, editors still need to exercise their own judgement – the tool can flag things, but it can’t replace that. MSincccc (talk) 10:37, 2 December 2025 (UTC)