Extracting information from databases requires skill and knowledge of query languages like SQL (Structured Query Language).
Natural language querying (NLQ), also called Text-to-SQL, lets users pose information-retrieval questions without knowing database-specific languages. ChatGPT is the latest development in this area.
In January, data analyst Marie Truong challenged ChatGPT to see whether it could write better SQL than she could, comparing SQL syntax validity, query structure, correctness of results, and runtime. The result? ChatGPT got 50% of it wrong.
One of many recent articles on this topic converts database schemas into natural language as part of ChatGPT prompts to obtain more precise results:
There has been an expectation that ChatGPT could assist in creating database queries, just as it can assist in creating computer programs. However, creating database queries requires an understanding of the database itself, and there is no conventional way to represent database semantics.
We present a solution to this problem by developing a syntax that can represent database semantics, such as table structure and relationships, in natural language. This allows for the creation of semantic representations of databases that ChatGPT can understand, enabling it to perform database management tasks.
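The idea of rendering a schema as prose for a prompt can be sketched in a few lines. This is not the paper's actual syntax, just a minimal illustration; the table and column names are made up for the example.

```python
def schema_to_prose(tables):
    """Render {table: {"columns": [...], "fk": {col: "other.col"}}} as sentences."""
    lines = []
    for name, spec in tables.items():
        cols = ", ".join(spec["columns"])
        lines.append(f"The table '{name}' has columns: {cols}.")
        # Foreign keys become explicit relationship sentences in the prompt.
        for col, target in spec.get("fk", {}).items():
            lines.append(f"In '{name}', the column '{col}' references {target}.")
    return "\n".join(lines)

schema = {
    "patients": {"columns": ["id", "name"]},
    "encounters": {"columns": ["id", "patient_id"],
                   "fk": {"patient_id": "patients.id"}},
}
print(schema_to_prose(schema))
```

The resulting text would be prepended to the user's natural-language question before sending it to the model.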
Experiment 2:
A SQL query needs to properly join four tables: careplans, providers, patients, and encounters. The encounters table plays a critical role here, as it connects the patients table with the careplans table. This information is typically contained in an entity-relationship schema. ChatGPT successfully generated a query that produces a correct view.
There has been research on generating SQL queries from natural language since 2016.
(Score: 5, Funny) by Rosco P. Coltrane on Monday March 20, @03:57AM (2 children)
DROP DATABASE Azure;
(Score: 2) by DannyB on Monday March 20, @02:53PM (1 child)
On a SQL UPDATE statement, please not to be forgettin' to suffix it with a WHERE claws!
Just a helpful hint.
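The hint above is easy to demonstrate; a quick sketch with a throwaway in-memory table (the `accounts` table is made up) shows what a missing WHERE clause does:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.0)])

# Forgetting the WHERE clause zeroes EVERY row, not just the one you meant:
con.execute("UPDATE accounts SET balance = 0")
wiped = con.execute("SELECT COUNT(*) FROM accounts WHERE balance = 0").fetchone()[0]
print(wiped)  # 2 -- both rows clobbered
```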
How often should I have my memory checked? I used to know but...
(Score: 2) by Freeman on Monday March 20, @06:21PM
Fixed it!
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 5, Informative) by Anonymous Coward on Monday March 20, @05:06AM (1 child)
ChatGPT is OK for writing fiction/boilerplate when you're too lazy to come up with it on your own.
And maybe for rewriting homework so that you don't get caught for plagiarism.
(Score: 0) by Anonymous Coward on Monday March 20, @02:02PM
It's possible it can learn, but from what I've seen it is indeed long-winded boilerplate of the type I've sat through thousands of hours of in meetings with managers. There's something to that...
Let's hope that over time, with more exposure to endless robotic chat, the bloated monologues we presently seem to regard as "intelligent" - e.g. 120-minute podcasts, 20-minute YouTube videos, shitty shitty shitty formulaic articles, compulsory comedy "monologues" - will train people to favor quick, realtime, information-dense, original interactions.
(Score: 3, Interesting) by looorg on Monday March 20, @06:42AM
So ChatGPT got it wrong a couple of times, but with some hand-holding it could complete the tasks. Just as she got things wrong herself but managed to find and fix her mistakes. So perhaps the queries, questions, or experiments she gave ChatGPT just need some refining to produce better results. So who is the problem here: the bot that doesn't understand what she is asking, or her, for not expressing herself or asking the question in the proper manner?
It really doesn't understand, though. Artificial intelligence is not intelligence. It doesn't understand. So it clearly can't understand the questions; it just checks for keywords and then tries to cobble something together. It doesn't know any better.
Also, ChatGPT apparently wrote horrific code. It wrote it fast, but the code ran slowly. No surprise there. It's almost like the WYSIWYG code of ages ago, where the tool just copy-pastes stuff and cobbles things together. No optimization or thinking at all. I'm sure at least this aspect of it can improve, or should improve if you tell it that it was wrong and what is correct. Learning and all.
I guess the thing is you need to know what you are doing (SQL, in this case) to be able to make sure that ChatGPT gets it right, too. You can't just trust it to produce proper results.
https://leetcode.com/problem-list/leetcode-curated-sql-70/ [leetcode.com]
So perhaps it will do better if you train it on solving more SQL queries. The question, then, is whether it is better to learn SQL, or better to learn to interact with ChatGPT (and other bots) so you can be proficient at making them do the work for you. Learn to ask better questions, in a way that the bot "understands" them. That is perhaps a skill for the future, instead of learning SQL.
Also, why didn't she do any of the "hard" questions? Perhaps it is for the best. Not that I would have done any better than her. I'm not that good at SQL; I can cobble things together when I need to. It won't be fast. It won't be pretty. But it will eventually work. That said, writing queries by hand is not a common task for me any more. There are tools for doing it now. Not as good as writing it yourself, it never is. But good enough.
(Score: 2) by fraxinus-tree on Monday March 20, @10:02AM
I know a few data analysts. I generally make a living fixing these people's messes.
(Score: 3, Interesting) by bradley13 on Monday March 20, @11:31AM (1 child)
All of her challenges apparently involved a single table. I'm no database expert, but in the time when I was working with ERP software, I wrote quite a few queries. Show me ChatGPT's work, when it has to deal with foreign keys, null values and subqueries.
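Foreign keys and NULLs are exactly where subtle bugs hide. A classic example (a sketch with made-up tables) is NOT IN against a subquery whose result contains a NULL, which silently returns no rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER);
INSERT INTO customers VALUES (1), (2);
CREATE TABLE orders (id INTEGER, customer_id INTEGER);
INSERT INTO orders VALUES (1, 1), (2, NULL);
""")

# NOT IN against (1, NULL): every comparison with NULL is unknown,
# so NO customer ever qualifies -- the query returns zero rows.
not_in_rows = con.execute("""
SELECT id FROM customers
WHERE id NOT IN (SELECT customer_id FROM orders)
""").fetchall()
print(not_in_rows)  # []

# NOT EXISTS sidesteps NULL comparison and finds customer 2 as intended.
not_exists_rows = con.execute("""
SELECT id FROM customers c
WHERE NOT EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id)
""").fetchall()
print(not_exists_rows)  # [(2,)]
```

Whether ChatGPT knows to reach for NOT EXISTS here is exactly the kind of test the single-table challenges never exercised.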
I am seriously impressed with where AI systems are. When they are producing textual answers to queries, absolutely no one cares about the odd small error. In fact, the grammar, word choice, and punctuation are better than most people produce.
When producing code, small errors are just as bad as big ones. Code must be correct, or it doesn't work. Using ChatGPT interactively to get ideas on how to code something - maybe that makes sense. It can also regurgitate small snippets for specific tasks. How to hash a password? Sure, it's seen that code and can reproduce it. However, these systems are a long way from producing useful code for non-trivial tasks. Good programmers won't need to worry about their jobs anytime soon...
Everyone is somebody else's weirdo.
(Score: 4, Interesting) by bzipitidoo on Monday March 20, @01:29PM
And multiple tables is where SQL becomes something of a black art. I've written perfectly sound queries that involve just inner joins of just 3 tables, and found them taking unacceptably long times to execute. Had to rewrite them to join 2 tables at a time, dumping the results of the 1st join into a temporary table, then joining the last table to the temporary. When queried that way, the database returned results immediately. Maybe the database should've been smarter?
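The two-stage rewrite described above can be sketched like this. The tables `a`, `b`, `c` are stand-ins for the three real tables; with toy data the speedup obviously doesn't show, only the query shape does.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (id INTEGER, b_id INTEGER);
CREATE TABLE b (id INTEGER, c_id INTEGER);
CREATE TABLE c (id INTEGER, val TEXT);
INSERT INTO a VALUES (1, 10);
INSERT INTO b VALUES (10, 100);
INSERT INTO c VALUES (100, 'hit');
""")

# Stage 1: join the first two tables, dumping the result into a temp table.
con.execute("""
CREATE TEMP TABLE ab AS
SELECT a.id AS a_id, b.c_id FROM a JOIN b ON a.b_id = b.id
""")

# Stage 2: join the temp table to the last table.
rows = con.execute(
    "SELECT ab.a_id, c.val FROM ab JOIN c ON ab.c_id = c.id"
).fetchall()
print(rows)  # [(1, 'hit')]
```

Forcing the join order by hand like this is essentially doing the query planner's job for it, which is the commenter's point: the database arguably should have been smarter.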
(Score: 2) by DannyB on Monday March 20, @02:59PM (2 children)
Can Chat GPT write code at least as well as people who cheat on tests, or people who spend more time looking for magical shortcuts to learn to write code in 24 hours or less, or people who think they can scrape by copying and pasting code from the intarwebs?
I think Chat GPT will do no better than the people described in the above run-on sentence. Not until Chat GPT can deeply think through problems and reason out novel solutions.
I'm not saying it will never happen. I'm just saying we're not there yet.
So far Chat GPT does well on easy problems, like how to find the prime numbers within a certain range. However, examples of how this is done are very well known.
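For reference, this is the kind of well-trodden textbook problem meant here; a standard Sieve of Eratosthenes solves it in a few lines:

```python
def primes_in_range(lo, hi):
    """Return all primes p with lo <= p <= hi (Sieve of Eratosthenes)."""
    if hi < 2:
        return []
    sieve = [True] * (hi + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(hi ** 0.5) + 1):
        if sieve[i]:
            # Mark every multiple of i starting at i*i as composite.
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p in range(max(lo, 2), hi + 1) if sieve[p]]

print(primes_in_range(10, 30))  # [11, 13, 17, 19, 23, 29]
```

Countless near-identical versions of this exist in tutorials and on Stack Overflow, which is precisely why a model trained on that corpus reproduces it so readily.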
How often should I have my memory checked? I used to know but...
(Score: 2, Disagree) by Common Joe on Monday March 20, @05:34PM (1 child)
Yeah. There's a reason for this: it's trained on datasets that include this information. Modern AIs can only regurgitate what they were fed. That's why it does well on easy problems. It's also why an AI as sophisticated as ChatGPT needs a second AI to censor the racist/hateful BS (which it learned from the Internet) that it would otherwise spew.
(Score: 2) by Common Joe on Tuesday March 21, @03:54AM
I don't know who disagreed with me, but I assume it's because I said two AIs are needed for filtering racism? Here's my source: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic [time.com]. It states in the fourth paragraph:
Additionally, ChatGPT can't multiply. Here is what I gave ChatGPT:
The actual answer is 1537616.083766
So, yes, ChatGPT (and other AIs) really do regurgitate what they are fed.
(Score: 2) by SomeGuy on Monday March 20, @06:34PM
Um, "natural language" querying of databases, spreadsheets, and other data sources has been attempted since at least the 1980s. Of course, each attempt was just an abstraction around an existing abstraction, and once you try to do real work, you either have to learn all the tiny nuances of a particular implementation - just like a programming language - or you find out it won't do what you need.
Good fucking luck getting a fucking chat program to understand the actual business requirements and specifications around any particular data field. I've had to dig through old file cabinets interpreting business documents, interrogate the few users that were around before the system was put in place, and reverse engineer specifications based on gibberish code and polluted data, before I could actually USE that data to produce results that were CORRECT.
Ah, right, but nobody cares about accuracy or any kind of meaning as long as they have a pretty graph to look at or a buzzword they can add on their resume.