26 events
when toggle format what by license comment
Jul 29, 2023 at 10:52 history edited Russell McMahon CC BY-SA 4.0
deleted 2 characters in body
Jul 27, 2023 at 14:52 history edited This_is_NOT_a_forum CC BY-SA 4.0
Active reading [<https://en.wikipedia.org/wiki/ChatGPT>]. Expanded. Added some context. Used more standard formatting (we have italics and bold on this platform).
Jul 17, 2023 at 6:31 history edited tripleee CC BY-SA 4.0
English + formatting
Jun 27, 2023 at 20:39 comment added HippoMan @NoDataDumpNoContribution: yes, ChatGPT can be useful, but it has no way of filtering false information out of its results. Relying on ChatGPT or other LLM-based tools to verify the integrity of other text is a misuse of those kinds of tools, due to the lack of safeguards against false positives and false negatives.
Jun 27, 2023 at 9:50 comment added NoDataDumpNoContribution @HippoMan ChatGPT is simply a tool, mostly a search tool. You have to check all of its output, but its suggestions can be helpful. ChatGPT helped me; I'm using it. I'm just not writing my contributions to the network with it. At some point I can imagine it helping me with that too, but I will still check everything it produces.
Jun 25, 2023 at 6:59 comment added HippoMan @RussellMcMahon: If by "Used with due diligence" you mean something like, "There are lots of things that ChatGPT should absolutely not be used for, and one should use due diligence to make sure that the software is never utilized for any of those purposes," then I would agree with you. In addition to not attempting to use it to try to solve math problems, other things which ChatGPT should absolutely not be used for are to try to distinguish fact from fantasy, and to try to distinguish truth from falsehood.
Jun 25, 2023 at 5:34 comment added Russell McMahon @HippoMan Used with due diligence, ChatGPT is a superb and useful tool. As a metaphor, it's like a double-ended katana with a mid-grip and no hand guards: without training and constant care you can cut yourself as easily as your opponent. Or a pair of nunchuks in other than expert hands :-).
Jun 24, 2023 at 19:30 comment added HippoMan I agree, @KarlKnechtel. Another of many examples: I asked ChatGPT how many times I need to take the square root of an adult's age before the result is less than 1. ChatGPT replied that as long as the age is a positive integer, only two square roots will be needed to yield an answer less than one. I asked the same question again, and this time I got a lot of totally incorrect gibberish about logarithms. Relying on a tool like ChatGPT to provide correct, useful, reliable answers to anything is an exercise in folly.
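[Editor's note: a quick sketch verifying the arithmetic behind the comment above. The square root of any number greater than 1 is itself greater than 1, so repeated square roots of an adult's age converge toward 1 but never drop below it; ChatGPT's "two square roots" answer is impossible. The function name is illustrative, not from any cited source.]

```python
import math

def sqrt_iterations_below_one(x, max_iter=100):
    """Count how many square-root applications it takes for x to drop
    below 1. Returns None if it never does within max_iter steps."""
    for n in range(1, max_iter + 1):
        x = math.sqrt(x)
        if x < 1:
            return n
    return None  # values > 1 converge toward 1 from above, never below

# For any adult age (e.g. 40), no number of square roots reaches < 1;
# only a starting value below 1 drops under 1 (immediately).
```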
Jun 24, 2023 at 14:05 comment added Russell McMahon @S.L.Barthsupportsmodstrike Yes. I'd not seen that done, but it's obvious in retrospect. || In my example I've looked at the packages suggested and am happy that they are legitimate. (My happiness may or may not correlate well with reality :-) ).
Jun 21, 2023 at 13:17 comment added Russell McMahon @KarlKnechtel Indeed. It may not work. BUT I started programming in FORTRAN in 1969, and in machine code (not assembler) on a NatSemi SC/MP microprocessor in 1976. I often dwelt in the darkness of embedded systems and assembler (!!!) (6800, 8080, Z80, 6502, AVR, PIC, ...), but various other languages and systems have happened along the way. I'm 72 :-). The code and the added packages seemed sound and logical. I've never used Python, but reading the code, it makes sense. It MAY have faults I've missed. TBD.
Jun 21, 2023 at 13:06 comment added Karl Knechtel @RussellMcMahon "I have not yet tried it but it looks sound. I've never used Python." - yes; that's the exact problem. ChatGPT has never used Python either, and doesn't actually have any idea whether the code is sound, no matter how strongly it might "profess confidence" (generate text that represents such claims) in the code. It doesn't have ideas at all. It only has an extremely sophisticated model of what words are likely to follow what other words, taking a rather large amount of context into account, using much more sophisticated algorithms than older attempts at AI.
Jun 21, 2023 at 13:03 comment added S.L. Barth is on codidact.com @RussellMcMahon And when running that code, it may import packages made by blackhats. twitter.com/llm_sec/status/1667573374426701824 (Found via Jon Ericson's tweets).
Jun 21, 2023 at 13:02 history edited Russell McMahon CC BY-SA 4.0
Rollback and fix some relevant areas. Prior edit had too many irrelevant changes.
Jun 21, 2023 at 12:49 history rollback Russell McMahon
Rollback to Revision 2
Jun 21, 2023 at 12:27 comment added Russell McMahon @KarlKnechtel I wanted to load photos to Facebook so that they appeared in the album in date order. I can do that manually by uploading them one at a time. I asked ChatGPT 3.5 how to do this. I described the task clearly, told it what works and what the issues were otherwise. It provided a Python program plus links to two necessary downloads from elsewhere. I have not yet tried it, but it looks sound. I've never used Python.
Jun 20, 2023 at 12:44 comment added ColleenV @KarlKnechtel I've said it before--we haven't made the breakthrough that leads to the scary kind of AI capable of self improvement (yet). The generative stuff can be a great tool, but people underestimate the amount of human labor it takes to get it there. AI Is a Lot of Work (Article from The Verge) I don't think people understand how much subsistence wage labor these systems are built upon.
Jun 20, 2023 at 1:52 comment added Karl Knechtel And even then, these tools are fundamentally unsuited for tasks that require actual programming-like problem solving - just as Copilot cannot write your program for you (if it could, it would have already taken over the world by now, or at least eliminated the overwhelming majority of existing programming jobs).
Jun 20, 2023 at 1:50 comment added Karl Knechtel @ColleenV Fundamentally, this style of generative AI will never be able to improve content by being trained on the overall corpus of the existing content. By design, it will generate output that mimics what is already there. The public ChatGPT has, to my understanding, already been fed with the entirety of SO (as of somewhere in 2021?) among many other sources. And as you say, such a model cannot necessarily be re-trained by feeding it a subset of SO selected by any simple heuristic (such as post score). It would need a fundamentally different kind of AI to filter the training data.
Jun 19, 2023 at 12:38 comment added ColleenV AI is a useful tool in the hands of someone willing to invest some time to understand it and its limitations. For example, I could use it to generate example sentences to illustrate a particular usage for my ELL answers. It is not useful (yet) as a general tool the way SE is experimenting with it. It requires a LOT of work to get a model that encodes a good post in SE terms. Right now, it's adding "Thanks in advance" type signatures and other undesired but prevalent content. There are many highly scored posts on SO that are not good exemplars for new users.
Jun 19, 2023 at 11:07 comment added Russell McMahon @KarlKnechtel I've added a comment re AI-improved questions. It is possible that they see the genuine advantages in that area and are blinding themselves to the fact that AI-generated answers are net-negative for quality except when vetted expertly.
Jun 19, 2023 at 10:57 history edited tripleee CC BY-SA 4.0
English + formatting
Jun 19, 2023 at 10:47 history edited Russell McMahon CC BY-SA 4.0
added 1420 characters in body
Jun 19, 2023 at 9:21 comment added Karl Knechtel @starball just because they agreed with the community to allow moderators to implement the ban (initially) and to publish that policy document, doesn't mean they've actually considered the implications of the underlying argument, nor that they agree.
Jun 19, 2023 at 8:47 comment added Carl Because SE is no longer close to its users in any realistic sense, the layers of narcissistic management marching to the latest admin fad (ESG gibberish) are complete with the takeover by Prosus. This may end badly. Hard to predict.
Jun 18, 2023 at 20:41 comment added starball re: "By allowing essentially uncontrolled use the company will destroy the integrity of its core asset." - sadly and bafflingly, they seem to already be aware of this: stackoverflow.com/help/gpt-policy
Jun 18, 2023 at 13:54 history answered Russell McMahon CC BY-SA 4.0