Wikipedia:Education noticeboard
Purpose of this page: This page is for discussion related to student assignments and the Wikipedia Education Program. Please feel free to post, whether you're from a class, a potential class, or if you're a Wikipedia editor. There are other pages more appropriate for dealing with certain specific issues.

Managing threads: If you'd like to make sure a thread does not get archived automatically after 30 days, use {{Do not archive until}} at the top of the section. Use {{User:ClueBot III/ArchiveNow}} within a section to have it archived (more or less) immediately. A brief Archives page lists them with the years in which those now inactive discussions took place.
Delving deep into the key aspects of WikiEd's new AI training materials that signify enduring stuff
So like everyone else above I've noticed the large number of AI concerns with student edits. I was curious what, if anything, students were actually being told. Until fall 2025, the answer was "not much," if anything (at least in the course dashboards I saw). But a lot has been added in the past week: an entire training module rolled out in time for fall 2025 courses, plus some smaller notes in other modules as of 3 days ago.
Before I say anything critical, I'm actually really glad that Wiki Ed is thinking about and taking some action on this issue, and that the module seems to be mandatory for new classes. Hopefully it bears fruit, although maybe I'm being optimistic. Unsurprisingly I don't really agree with much of the advice, but I realize I'm swimming against the cultural tide on that one. But since the training module does ask people for feedback, I do have some:
- The elephant in the room is that the student training never takes a firm stance on whether it's OK to write articles with AI. One wrong answer to the quiz says that
[asking] an AI tool to write a new paragraph for a specific Wikipedia article using high-quality sources it finds online
is not acceptable, for obvious reasons. But the module never states this outright (I actually got the question wrong because of this, whoops), and some of the other training guidance is clearly written under the assumption that students did use AI for the writing, or at least rewriting.
- That "mismatching sources" section could also use some clarification that even if a fact is true, the source still has to mention it or else the citation isn't valid. A really common pattern I notice in student submissions is that they first "write" the article (with ChatGPT) and only then tack on some sources, which unsurprisingly poorly match the text.
- There's a section warning students about AI fabricating nonexistent sources, with a list of made-up URLs as examples. It might be better to use book or study titles instead -- URLs change or break all the time, possibly even during the class, and the section as written kind of encourages students to check only that the URL isn't broken, not whether it leads to the cited source with the same author, date, and so on (a sketch of that distinction follows this comment).
- Here's the AI guidance on tone:
Is the text padded with 'filler language' or extra words to inflate the length? (AI tools often write like this!)
This isn't wrong per se, but it doesn't really capture the main issue. It's probably a bad idea to include specific words/phrases in there for WP:BEANS reasons, but based on the articles we're seeing, I think it would really, really benefit students to have some version of the "emphasis on symbolism and importance," "editorializing," "superficial analysis," and "promotional language" sections -- not just what to avoid, but why it's bad. A lot of students might be under the impression that this is just what good writing sounds like.
- In the guidance on plagiarism and AI:
These tools will often rephrase content while retaining the sentence and paragraph structure. While this may get past plagiarism checkers, it’s still a copyright infringement and is forbidden on Wikipedia.
This isn't wrong, necessarily, but it implies that the LLM is actually summarizing the material faithfully to the text. As we have all learned recently, that's not how it works. Given the recent ANI debacle it might also be a good place to mention that getting past ZeroGPT/etc doesn't make something acceptable either.
- Students are encouraged to use AI editing/proofreading tools unless their professor tells them not to. I think this is a very bad idea. AI copy editing tools have a tendency to sneak the synthesis slop back in, and are especially bad at NPOV -- to the point where edit summaries like "I rewrote this to have a more neutral point of view" are starting to feel like a red flag. I almost think this part could be copy-pasted in from the translation section here, as it all applies:
It might give you a translation that looks good, but actually fabricates information, mistranslates key terms, or loses important nuance and tone from the original text.
- This is more of a policy thing, but I would really like WP:LLMDISCLOSE to be incorporated.
All this being said, I'm glad to see that students are now being given something; hopefully these considerations can be taken into account. Gnomingstuff (talk) 22:08, 15 August 2025 (UTC)
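(A concrete illustration of the point about URLs above: a link that resolves is not the same as a link that supports the citation. Below is a minimal sketch of that check, assuming Python's requests library; the URL, title, and author are invented for the example, and matching on raw page text is deliberately crude.)

```python
import requests

def check_citation(url: str, cited_title: str, cited_author: str) -> str:
    """Distinguish 'the URL works' from 'the URL matches the citation'."""
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException as exc:
        return f"URL unreachable: {exc}"
    if resp.status_code != 200:
        return f"URL broken (HTTP {resp.status_code})"
    page = resp.text.lower()
    # A page that loads is the weakest possible check; the cited title
    # and author should actually appear in what the URL returns.
    missing = [s for s in (cited_title, cited_author) if s.lower() not in page]
    if missing:
        return f"URL resolves but does not mention: {missing}"
    return "URL matches the citation (the claim itself still needs checking)"

# Hypothetical citation: the URL may load fine yet point at something else.
print(check_citation("https://example.org/some-article",
                     "A Study of Something", "Jane Doe"))
```

Even when both checks pass, the source still has to be read to confirm it supports the specific sentence it is attached to.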
- Getting down to it, whether it's stated so explicitly or not, the takeaway of any LLMs-and-Wikipedia training should be "Wikipedians hate this stuff and have no patience for anyone using LLMs on Wikipedia. If they get so much as a whiff that anyone in the class is using ChatGPT to write an article, everyone will have a bad time." There's a lot that students should learn about constructive ways to use AI, but any use of AI in a Wikipedia assignment should be regarded as a red flag a la Literature students editing medical articles, giant classes, and grading based on what sticks. — Rhododendrites talk \\ 22:35, 15 August 2025 (UTC)
- Yes, indeed. I would like to underscore @Rhododendrites's point here. Debates about whether AI is acceptable in education or on Wikipedia are basically immaterial when it comes to the question of what WikiEdu should tell students, which ought to be very clear: if a Wikipedia editor thinks someone in your class is using AI, every single one of you will come under the microscope, and no one will enjoy the experience. -- asilvering (talk) 03:11, 16 August 2025 (UTC)
- I think this is an important point -- in my teaching experience, wikipedia culture is the part students most need instruction about (since, of course, they have had no other place to learn it). A student who encounters the same kind of "strengths and weaknesses of LLMs" instructional module that they've probably had a dozen times, and then encounters a Wikipedia editor who has caught them using an LLM, is likely to feel like they are the victim of a bait-and-switch. I think it ought to be similar in tone to how we approach newbies who want to make a new article from scratch -- "you're choosing to jump in the deep end, but you're allowed to find out the hard way if you're a skilled enough swimmer for it, so here's a heads-up about the biggest reefs in the harbour." ~ L 🌸 (talk) 04:01, 16 August 2025 (UTC)
- I'll add my agreement with the comments above. It does students a disservice to add some LLM content, and then see the response. It's a lot like editing in a CTOP: not worth the hornets nest that it would disturb. Just looking at the heated discussions about LLMs all over project space right now, this is clearly a fraught issue. --Tryptofish (talk) 22:39, 16 August 2025 (UTC)
- Yeah, the timing is especially bad since the semester just started amid all this. Like I said, I disagree with the whole premise of the thing, but I've tried to meet it halfway with suggestions, to help prevent some of the most common problems as much as possible -- not to mention to keep people from yelling at college kids.
- Would like to hear from WikiEd on this although I realize it is Monday morning. Gnomingstuff (talk) 15:23, 18 August 2025 (UTC)
- "A really common pattern I notice in student submissions is that they first "write" the article (with ChatGPT) and only then tack on some sources, which unsurprisingly poorly match the text." To be fair to the students, this is not new to llms, this happens even with hand-written articles. Any guidance on writing articles would benefit from switching to a source-first framework from the get go.On using AI for editing/proofreading, I have consistently experimented with this over the past few months. It is good at catching typos, but not consistent in catching all typos. Its grammar suggestions are sometimes okay, sometimes poor, but this may also depend on the initial prose quality. The biggest concern is that an llm will 'offer' to rewrite the text itself incorporating the changes it suggests, which goes beyond what traditional proofreading tools would do (usually one by one manual checking), and is where further issues can creep in (the sneak the synthesis slop back in step mentioned above). These potential pitfalls compared to 'traditional' proof-checking software are not going to be understood by all students. CMD (talk) 15:40, 18 August 2025 (UTC)
- Hi all, thanks for checking out the new training and the content added to the existing modules, and for providing feedback -- much appreciated. As Gnomingstuff noted, as Wikipedia policy and university (and other) adoption of generative AI tools evolve, we wanted to give students clearer guidance. Most of the classes we support will not begin for another couple of weeks, and many don't start the Wikipedia assignment right away. We published the training and have distributed it to several instructors to collect their feedback on what we should change, so we'll be publishing some edits to it next week based on their input, as well as the feedback you all are providing here, before students actually take it. So consider this a draft that is open to input!
- Some background: we've anecdotally heard many times over the years that our plagiarism module was the first time students actually understood that they couldn't just copy and paste something and then change a few words; we've learned never to assume students' prior instruction in something like this was accurate. So our goal here is to provide explanations, guidance, and guardrails to the students who mean well but don't understand generative AI.
- Given that, we wanted to emphasize ways to use gen AI that don't involve generating text, as we agree with the general sentiment expressed here that students aren't experienced enough to use LLM output responsibly on Wikipedia, and we want to discourage that. Your feedback here has made me realize we need to make it even clearer that students should not use it to generate text that they then incorporate into Wikipedia, so we'll make some updates next week based on that feedback.
- We'll also make some other smaller edits based on Gnomingstuff's list of bullet points. The feedback from Gnomingstuff and Chipmunkdavis about copyediting is a really important point; I personally haven't seen gen AI tools add new text when asked to copyedit, but it's good to know that happens. We'll need to discuss internally what to do with this suggestion; we had seen copyediting as really helpful for students who are English language learners, who have good research skills but whose English isn't perfect. In the past, otherwise well-researched but poorly worded contributions have been reverted, so our hope was gen AI could help those students. @Chipmunkdavis:, have you found a tool that is better at this than others? We definitely still want to find ways to help these students add more grammatically correct information to Wikipedia, but we obviously don't want them to add hallucinated information!
- In sum, we certainly appreciate and welcome feedback, and you can look for edits to these slides at some point next week. We definitely want this module to be useful not only for our students, but potentially also for other new editors who may be using generative AI, given its widespread adoption across the world. --LiAnna (Wiki Ed) (talk) 00:39, 19 August 2025 (UTC)
- @LiAnna (Wiki Ed), from my own experience teaching English-learners: if you can convince them not to use LLMs to check their grammar, I promise you will be doing them a real service. Students that rely on these tools when their language skills are really insufficient don't end up learning anything much from the experience, and then they crash and burn on assessments where they don't have the option (eg, exams), completely without warning. Meanwhile, students whose language skills are mostly ok but whose confidence is lacking... it's heartbreaking stuff. They internalize that the computer is better at writing than they are and do learn from it - to their detriment. They lose their own voices and start writing like AI. Soul-crushing. -- asilvering (talk) 02:07, 19 August 2025 (UTC)
- It's hard to know which tool is better than others, especially as they change with updates. One thing I have found that works is to very clearly and specifically tell the llm "Please output the list of typos and suggestions in a bulleted list, do not edit the original text" or similar; dealing with those bullets is then much more like a traditional spellcheck, and manually following them makes it extremely unlikely hallucinated information gets in (because hallucination is less likely in short text, and because students would have to miss it as they read and write). Having such a bulleted list may also help with the concerns Asilvering has, as students fixing the typos and grammar themselves is likely better for learning. I would add that I agree with Asilvering that llms do take away a voice, especially in the grammar suggestions. I find many of the grammar suggestions to be unnecessary (although only a small minority are outright wrong), and it is these unnecessary suggestions which remove individuality. I don't think llms can somehow distinguish good from mediocre grammar suggestions, although they do seem to understand the difference between typos and grammar suggestions, so you can get those on separate lists. CMD (talk) 04:17, 19 August 2025 (UTC)
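(For what it's worth, CMD's "list, don't rewrite" instruction works the same way in a script as in a chat window. Below is a minimal sketch, assuming the OpenAI Python client; the model name and draft text are placeholders, and the prompt paraphrases the suggestion above.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Teh mine, which opened in 1990, were a major producer of gold."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=[
        {"role": "system",
         "content": ("You are a proofreader. Output a bulleted list of "
                     "typos and grammar suggestions only. Do NOT edit "
                     "or rewrite the original text.")},
        {"role": "user", "content": draft},
    ],
)

# The student applies (or rejects) each bullet by hand, as with a
# traditional spellchecker, instead of pasting model output into Wikipedia.
print(response.choices[0].message.content)
```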
- Did some more personal testing on checking, and it is really worth emphasising to students that llms can be wrong even on matters of grammar. ChatGPT just told me that “opening of the Porgera Gold Mine” is inconsistent with Ok Tedi Mine (capitalised “Mine”), which as you can see is clearly going for a certain pattern, but falls flat on its face with this actually consistent example. CMD (talk) 08:26, 19 August 2025 (UTC)
- Thanks for the reply and the update on the schedule -- I was going by the syllabi. Probably good that there will be a few weeks to test.
- I've definitely seen "copy edits" that insert new content, both here and elsewhere. I'm not sure whether it's a case of LLMs being fundamentally unable to stick to copy editing, commercial AI tools blurring functionality together -- a lot of them advertise like 5 different ways to edit -- or people just forgetting/not being completely open about what they prompted. (Not necessarily maliciously; I feel like anyone who's done editing has found themselves starting to make way more changes than they planned.) Even the minor copyedits tend to nudge the text toward a non-neutral point of view, at least often enough that you can search for various promotional-type phrases and find some. Gnomingstuff (talk) 05:22, 19 August 2025 (UTC)
- It's also that "copy editing" has varying meanings. Personally, I wish we avoided the use of the term on Wikipedia entirely, because of this kind of confusion. -- asilvering (talk) 17:21, 19 August 2025 (UTC)
- Here's a good example of the kind of thing I mean. According to the editor's talk page this is from Grammarly (which uses LLMs), I have no real reason to disbelieve that. Gnomingstuff (talk) 17:30, 19 August 2025 (UTC)
- Yikes! This is super helpful, everyone, thank you. I'll follow up next week once we've made the edits to the modules. --LiAnna (Wiki Ed) (talk) 21:00, 19 August 2025 (UTC)
- Discouraging generating new text for inclusion is good, but honestly I think using an LLM to directly modify any text for inclusion should also be plainly discouraged. These predictive models cannot have their output reliably constrained to certain tasks. Even simple prompts to only correct grammar or spelling errors can negatively alter tone or introduce hallucinations. fifteen thousand two hundred twenty four (talk) 20:53, 25 August 2025 (UTC)
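(One way a reviewer, or a cautious student, might check whether an AI "copy edit" stayed within bounds is to diff the before and after texts word by word, so silent insertions become visible. A sketch using only Python's standard library; the two example texts are invented.)

```python
import difflib

original = "The plant opened in 1971 and employs about 200 people."
ai_edited = ("The plant, a cornerstone of the local economy, opened in "
             "1971 and employs about 200 dedicated people.")

# Word-level unified diff: anything the model slipped in shows up as '+'.
diff = difflib.unified_diff(original.split(), ai_edited.split(),
                            fromfile="original", tofile="ai_edited",
                            lineterm="")
print("\n".join(diff))
# Here "a cornerstone of the local economy," and "dedicated" appear as
# additions: new, unsourced, non-neutral content, not proofreading.
```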
- Hi all, just to follow up here. Based on feedback we received from you all, as well as from various instructors and advisory committee members, we've made a pretty substantive revision to the module. In particular, based on the feedback here, we've more strongly discouraged copying and pasting content from an AI chatbot into Wikipedia, and we've removed the copyediting suggestion. You're welcome to review the new version of the module here: https://dashboard.wikiedu.org/training/students/generative-ai
- We appreciate all the feedback, and we'll be closely monitoring student work this term to see what adjustments we need to make to the module in the future. --LiAnna (Wiki Ed) (talk) 18:56, 26 August 2025 (UTC)
- I think this is looking very nice! Thanks for providing the update.
- One point that probably doesn't fit into the training but that I see students run afoul of regularly: it's much better to write three sentences strongly backed up by the cited source than paragraphs of summary loosely linked to the identified sources. This is a common gotcha because it's the opposite of the writing most students are used to doing. Using LLMs, I've seen students turn what could have been a perfectly reasonable stub into a giant mess that needs to be completely blown up. The training's suggested use cases all point toward sourcing – which is great – but I wonder if there's an opportunity there to underscore that a longer Wikipedia article is not necessarily a better one (and is often worse than a more concise summary)! Cheers, Suriname0 (talk) 22:23, 26 August 2025 (UTC)
- Community policy on AI is, inevitably, changing very rapidly. I want to note (in part because I'm making the proposal) that it's looking likely that we will have a new policy in the near future that is based on WP:LLMDISCLOSE, that would make it mandatory that users, including student editors, disclose when they are using LLMs. --Tryptofish (talk) 22:42, 26 August 2025 (UTC)
- Thanks for the update, I appreciate you incorporating the suggestions. Gnomingstuff (talk) 05:10, 28 August 2025 (UTC)
- It probably needs to be stated smack dab up front that using AI after agreeing not to is potentially sanctionable as academic dishonesty by universities, up to and including expulsion.
- Seconding that WikiEdu needs to not recommend ways people can use AI at all. There have been real, long-running issues with the fact that WikiEdu students in many cases simply don't care about WP; rather, they are just "in it for the grade". We don't need to mix what is already occasionally detrimental with a weaponized bullshit machine that's going to make it more challenging for editors to prevent slop from filtering into the project.
- If WikiEdu intends to provide guidance for the use of LLMs, I'd really like a link to where consensus was established that the degree of use they're calling acceptable is considered so by the community. I don't think @LiAnna (Wiki Ed) et al. should be telling us what they're going to do anyway and asking us how to improve it when there's strong evidence that Wikipedians reject the very thing they're trying to teach people to do on Wikipedia. WikiEdu doesn't have a module on how to write great personal attacks, for example. 77.250.143.134 (talk) 07:47, 28 August 2025 (UTC)
- "In this module, you'll explore how generative AI tools can support your editing work on Wikipedia when used thoughtfully, critically, and in line with the policies of Wikipedia and your course instructor." I'll shorten that module for you: they can't. Outsourcing one's thinking to the slop machine is, practically by definition, not "thoughtful". When you depend on a machine to do your writing, you don't learn how to write. Any student whose course instructor permits the use of generative AI is entitled to ask for their money back. Stepwise Continuous Dysfunction (talk) 01:51, 27 August 2025 (UTC)
- The module as it is written currently is more nuanced than "depending on a machine to do your writing". The three (two and a half) suggestions the course makes are: to use models to help identify potential knowledge gaps, to locate new sources for investigation, and to locate known sources for acquisition.
- That said, the course is written with a base assumption that model use can be helpful (or good) at all, which is not a universally held view on Wikipedia by any means.
- Even some "successful" applications of a model can be viewed as detrimental overall. Every edit made because a model surfaced gaps in information is an edit that steers the encyclopedia further away from a collection of information curated by what human judgement has found to be important, to a collection of information that a model has found important. A human agreeing with the returned information doesn't rectify the problem, they'll never know what the model didn't output or failed to give prominence to. fifteen thousand two hundred twenty four (talk) 02:43, 27 August 2025 (UTC)
- Yeah, since not a day goes by that we don't have to delete a page for having nonsensical, fabricated sources, the idea that the slop machines can be useful for research is ... unfounded. The very best case is that people who already know what they're doing can fix all the problems. But we're not talking about people who have already developed research and writing skills: students are still learning how to do all that. (An essay by some long-term Wikipedia editors suggests that no instructor make editing Wikipedia a course assignment at all until the instructor themselves has enough experience to know how writing here differs from what's expected elsewhere. Pretending that there's an easy technological fix to improve student writing is just going to amplify all the problems with it, and intensify the feeling described in that essay that nothing good comes from such classes.) And that's not even touching every other problem, like the little fact that Wikipedia probably shouldn't be endorsing the plan that schools adopt technology that will sext their students and/or encourage them to commit suicide. Stepwise Continuous Dysfunction (talk) 04:07, 27 August 2025 (UTC)
- The only message that Wikipedia and WikiEd should be giving student editors about LLMs is "never use them here". --Tryptofish (talk) 17:23, 27 August 2025 (UTC)
Even some "successful" applications of a model can be viewed as detrimental overall. Every edit made because a model surfaced gaps in information is an edit that steers the encyclopedia further away from a collection of information curated by what human judgement has found to be important, to a collection of information that a model has found important.
I think this is a really important point, and it would be especially critical when it comes to the tendency of LLMs to try to "balance" content on contentious topics (in the more general sense than just CTOPs). Treating a minority position as if it deserves equal space to the majority, or as if it can be used to "rebut" the majority, is unacceptable. I also do not trust LLMs to understand what RS means, or what is secondary or independent, or the concept of BALASP; a company's wiki page will certainly have a "knowledge gap" that is filled by its own website (and press releases), but that doesn't mean any of it belongs in the article. I could even see this being harmful for completely uncontroversial, non-PROMO subjects, e.g., a scientist BLP where there is some easily accessible primary media coverage of high school sports achievements (which may also be submitted to local newspapers by the school/parents) but which is dwarfed by secondary, independent discussion of their work in paywalled scholarly articles. I would just flatly prohibit using LLMs in any way beyond accessing known sources for BLPs (or, frankly, anything...). JoelleJay (talk) 23:39, 27 August 2025 (UTC)
- To be fair, the guide does mention this:
Using an AI tool to search for gaps can be helpful, but only after you have researched the topic thoroughly yourself and are confident you know what’s missing from the article.
Whether a student is the best judge of what's missing is a whole other issue, but it's not telling them to use an LLM to identify gaps without their input. Gnomingstuff (talk) 05:45, 28 August 2025 (UTC)
- What's the point of using AI to search for gaps if you already know what's missing from the article? JoelleJay (talk) 06:16, 28 August 2025 (UTC)
- There's still the issue of the model surfacing only whatever it is predisposed to. This inherently will shape the article to be more model-aligned and less human-aligned, and when applied at scale the same will apply to Wikipedia too. If you'll allow me some abstraction:
- Imagine an article is missing information "A" and information "B". A human reviewer has not noticed either gap, but does prompt a model for what information to consider adding. The model responds with "A", and the human reviewer agrees and makes an edit to add it. This looks like a good outcome: there is more relevant information for our readers! But in this hypothetical there is a bias in the model, and sequences like "A" are always emphasised while sequences like "B" are de-emphasised. Apply this model to find gaps in information across multiple articles and the encyclopedia will slowly bias towards "A" and away from "B".
- It may seem absurd, but I do think that as LLM use continues to grow, this kind of larger and indirect slow-drift bias is a genuine concern. So seeing a WikiEd course endorsing LLM use in any form where it can exert any influence over what content is included is disconcerting. fifteen thousand two hundred twenty four (talk) 06:49, 28 August 2025 (UTC)
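(The drift argument above can be made concrete with a toy simulation. This is an illustration of the abstraction, not a measurement of any real model; the 80% surfacing bias and the article count are arbitrary assumptions.)

```python
import random

random.seed(0)
P_SURFACE_A = 0.8  # assumed model bias: "A"-type gaps surfaced 80% of the time
articles = [{"A": 0, "B": 0} for _ in range(1000)]  # each misses both A and B

for article in articles:
    surfaced = "A" if random.random() < P_SURFACE_A else "B"
    article[surfaced] += 1  # the editor agrees and fills only the surfaced gap

filled_a = sum(a["A"] for a in articles)
filled_b = sum(a["B"] for a in articles)
print(f"'A' gaps filled: {filled_a}, 'B' gaps filled: {filled_b}")
# Expect roughly 800 vs 200: every individual edit looked fine, yet
# coverage drifts toward "A" without any single bad decision.
```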
- I agree with you, just to be clear. Just wanted to clarify the actual text of the module. Gnomingstuff (talk) 12:56, 28 August 2025 (UTC)
- Yeah I'm upgrading my recommendation to "strongly oppose any application of LLMs for content suggestions". WikiEd students should never be entrusted with this; they're bad enough at, or uninterested in, properly evaluating BALASP on their own. JoelleJay (talk) 17:46, 28 August 2025 (UTC)
- That line from the guide is hilarious, in a bad way. It amounts to saying,
Using an AI tool to search for gaps can be helpful, if you have already done all the work and know where the gaps are.
And that's just a long-winded way of saying,
AI tools are not useful.
- The point of a WikiEd course is to help students become better writers and to benefit the world at large by improving the encyclopedia. The last thing that WikiEd should support is the use of a "tool" that impedes the students' skill development while also generating misinformation. Stepwise Continuous Dysfunction (talk) 18:36, 28 August 2025 (UTC)
Possible Future Class
I just notified my teacher that this program exists. They were very interested in it. -Flower
English IV class fyi. 24.155.147.109 (talk) 20:11, 3 September 2025 (UTC)
- FYI the WikiEd program is only for college and university classes and not high school classes. Good luck, though. wizzito | say hello! 20:57, 9 September 2025 (UTC)
- (just saw because IP Addr) oh, sad. I didn't see that anywhere when researching it. 24.155.147.109 (talk) 17:59, 16 September 2025 (UTC)
General Inquiry
As far as I can see, the Wikipedia Education program is still active, but I was surprised to find that the project is listed on Meta as closed. The edits were made by an unregistered user in 2021 and have not been reverted. We don't know whether this was vandalism or whether the project really was closed. Ibrahim.ID ✪ 00:40, 7 September 2025 (UTC)
- Hi Ibrahim.ID! The Wikimedia Foundation's education team no longer exists, so it's possible that's what the user was trying to convey -- they are no longer updating those pages on Meta. Instead, much of the education work is centered around the m:Wikipedia & Education User Group, and individual education programs are active in many countries. Those pages on Meta are no longer maintained, so feel free to put in a better template if that makes sense. --LiAnna (Wiki Ed) (talk) 22:33, 9 September 2025 (UTC)
St. Thomas University
Since roughly October 2022, there have been a lot of newly registered users editing the article on St. Thomas University (Canada) to add promotional content about the university. Per Talk:St. Thomas University (Canada)#Policies and Reports, at least one of these groups claimed to be editing for a class. This behavior has started up again with two accounts, one of which was blocked for a username violation. I'm currently trying to find out whether these accounts are also editing for a class. wizzito | say hello! 20:47, 9 September 2025 (UTC)
- Based on the username of one of last year's accounts (ENGL1233kyra), the course in question might be ENGL 1233, course title "Digital Literacy" at St. Thomas University. That course is indeed running this semester, but no professor is openly listed. https://www.stu.ca/english/current-courses/ wizzito | say hello! 20:52, 9 September 2025 (UTC)
- Thanks! We'll try to reach out and see if we can find out anything. --LiAnna (Wiki Ed) (talk) 22:35, 9 September 2025 (UTC)