If you’d like to read this issue on my website, click here! If you’d like to sign-up, and receive this in your inbox each week, click here! Read past issues here.
Good Friday Morning! Except to China, which, as of this writing, has a spy balloon traveling over the United States. This story is too new to cover in the full newsletter. I’m watching it break as I write, and anything I send out now will be hopelessly out of date by the time you read it. One projected path has the balloon heading over my area of Tennessee, so I may actually get to see this thing.
The balloon is showing up just before the Secretary of State is supposed to travel to China, and it’s an embarrassment for the United States military. According to the WSJ and other outlets, the White House and military have opted against shooting it down out of fear it would harm civilians below – which seems hard to believe when the balloon has so far passed over sparsely populated places like Alaska, Canada, and Montana.
This is a provocation on par with the U-2 flights the US made over the Soviet Union during the Cold War. China would only do this to embarrass the White House. On that front, they’ve succeeded.
Back to this issue: this week, I’m covering a new technology that’s all the rage – ChatGPT and artificial intelligence. Links to follow.
Where you can find me this week
Please subscribe, rate, and review my podcast on iTunes, Spotify, or Google Play — the reviews help listeners, and readers like you find me in the algorithms. Make sure to sign up for the Conservative Institute’s daily newsletter and become a subscriber at The Dispatch, where I’m a contributor.
[01/30/2023] As Russia advances, US leadership fails Ukraine – Conservative Institute
[02/03/2023] Democrats reap what they sow with Ilhan Omar – Conservative Institute
Artificial Intelligence Crosses a New Threshold
For several weeks, I’ve wanted to write about artificial intelligence and ChatGPT. I’ve worked in and around various uses of AI since I started my legal career, and this is the first time I’ve ever encountered an AI that works. And by works, I mean it’s stunning what it can do.
Some background from my vantage point might help. Most readers know I’m a lawyer by trade. My day job involves eDiscovery, the process of gathering and reviewing all the electronic evidence in a lawsuit. Since the advent of computers and the email age, the electronic documents in a case are everything, and the sheer amount of data (emails, docs, etc.) that people in large corporations generate has exploded.
In the early 2000s, when eDiscovery started taking off, people just gathered all the documents and reviewed them. Lawyers billed for months on end looking through everything. If you had 3 million records, you looked at 3 million documents (and partners asked you to print a lot of them out). Eventually, people got more innovative: they started running search terms over the documents and using other smart-search techniques to cull the pile. Machine learning and AI arrived just before I entered the legal profession.
Throughout all that time, most AI programs and machine learning algorithms were junk. Their accuracy depended on a group of attorneys agreeing on how to categorize every document. In a room of 10 attorneys, I can get you 20 ways to categorize a document. I’m not even joking there. I’ve sat in the rooms and heard the discussions. Lawyers don’t agree on much. And if the lawyers don’t agree, the algorithm trained on their calls won’t either (garbage in, garbage out).
And sometimes, it’s just impossible for a computer algorithm to understand why a lawyer splits the hairs that we do. For instance, I’ve worked on several environmental cases. We were hyper-focused on the reports sent to the EPA and measuring the accuracy of the data submitted. The AI algorithm struggled to tell the difference between the air, water, and soil reports (all of which looked similar to a machine) or why we were only interested in a subset of those reports.
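To make the garbage-in, garbage-out point concrete, here’s a minimal sketch of the kind of supervised classifier these document-review tools are built on. Everything in it is hypothetical – the documents, the labels, the model choice – but it shows why the attorneys’ labeling decisions are the whole ballgame:

```python
# A hypothetical sketch of predictive coding in eDiscovery: train a
# classifier on attorney-labeled documents, then use it to triage the rest.
# If the attorneys labeled similar documents inconsistently, the model
# faithfully learns that inconsistency (garbage in, garbage out).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: documents with responsiveness calls made by reviewers.
# In a real matter, this would be thousands of attorney-reviewed documents.
train_docs = [
    "Quarterly air emissions report submitted to the EPA",
    "Soil sampling results near plant 7, spreadsheet attached",
    "Lunch order for the deposition prep team",
    "Happy hour on Friday, who's in?",
]
train_labels = ["responsive", "responsive", "not_responsive", "not_responsive"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# Triage an unreviewed document so humans only look at the likely hits.
new_doc = ["Draft water discharge report for the EPA filing"]
print(model.predict(new_doc))  # -> ['responsive'], if the labels held up
```

The pipeline itself is trivial; the old tools failed because the labels feeding pipelines like this were never consistent in the first place.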
As the amount of data produced daily by the world skyrockets (see this for some stats), it becomes harder and harder to find relevant info. It’s impossible to have human eyes on everything. The best analogy I’ve encountered is this: finding relevant information today is like mining; we’re trying to find veins of gold in an era where the mountains of dirt (and the gold in them) increase exponentially every day.
In short, we need AI to work. There’s too much data out there to analyze purely with human eyes. People take time and cost money, and we only want them looking at the essential things, not the junk. Some of the AI programs out there have improved, and I’ve used a few good machine learning tools in my time. But most are still poor.
That’s changed in the last few years and accelerated with ChatGPT.
Here are some of the latest news items using AI in the legal world:
- ChatGPT passes Minnesota Law School’s final exams with a C+ average
- DocketAlarm uses ChatGPT to create instant summaries of filings on a docket
- Contracts Company Ironclad Taps Into GPT-3 For Instant Document Redlining Based On A Company’s Playbook
These are some stunning advances for an AI program in the legal world. You may have seen headlines about a “Robot Lawyer,” where an AI program would help an unrepresented person know what to say and do in court. That story was mostly a publicity stunt and likely an example of fake AI programs (see a breakdown here).
If it works, Ironclad’s instant redlining feature is a billion-dollar solution. Transactional lawyers spend inordinate amounts of time doing that very thing, and it wouldn’t shock me if an AI program were better at it. You’re asking the program to highlight contractual sections that don’t mesh with your company’s standard positions.
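I don’t know what Ironclad’s product looks like under the hood, but here’s a toy sketch of the playbook idea, with made-up clause names and an arbitrary threshold: compare each drafted clause against the company’s approved language and flag anything that drifts too far.

```python
# A toy sketch of playbook-style contract redlining: flag clauses that
# drift from a company's approved language. A real tool like Ironclad's
# reportedly uses GPT-3; this illustration just measures text similarity
# with Python's standard library.
from difflib import SequenceMatcher

# Hypothetical playbook: the company's approved language per clause.
playbook = {
    "limitation_of_liability": "Liability is capped at fees paid in the prior twelve months.",
    "governing_law": "This agreement is governed by the laws of Delaware.",
}

# Hypothetical incoming contract, keyed by the same clause names.
contract = {
    "limitation_of_liability": "Liability is unlimited for all claims of any kind.",
    "governing_law": "This agreement is governed by the laws of Delaware.",
}

for clause, approved in playbook.items():
    drafted = contract[clause]
    similarity = SequenceMatcher(None, approved, drafted).ratio()
    if similarity < 0.8:  # threshold is arbitrary, for illustration only
        print(f"REDLINE {clause}: only {similarity:.0%} match to playbook")
```

A GPT-3-backed version would go further – suggesting replacement language instead of just flagging the drift – but the clause-by-clause playbook comparison is the shape of the task.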
What is ChatGPT? Let me allow it to describe itself (since I asked it this question):
ChatGPT is a conversational language model developed by OpenAI, one of the leading research organizations in artificial intelligence. It is a transformer-based neural network trained on a massive corpus of text data, allowing it to generate human-like responses to various types of questions and prompts. It can be used for various tasks such as question answering, text completion, and text generation, making it a powerful tool for natural language processing applications.
That’s a techno-babble answer, so I asked it to explain what it was in simple terms:
ChatGPT is an AI language model that has been trained on a large corpus of text data to generate human-like responses to text inputs. It uses deep learning techniques to generate answers to questions, complete sentences, or generate text based on prompts given to it. Simply put, it’s a computer program that can understand and respond to natural language inputs in a way that resembles human conversation.
And if that’s too much, here’s an explanation for a five-year-old:
ChatGPT is like a robot that can talk to you like a person! It has been taught a lot of words and phrases by smart people, so it can answer your questions and even have a conversation with you. It’s like having a new friend who can talk about lots of things!
The key is this: imagine if you could talk to Google (or whatever search engine you use) instead of running search terms through it. In a nutshell, that’s ChatGPT. It’s not perfect, but no research assistant, paralegal, first-year law student, or anyone else is perfect. It can get you into the ballpark on a question, which is remarkable.
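For the technically curious, ChatGPT itself is a web product, but OpenAI exposes the underlying GPT-3 models through a public API, so the “ask a question, get an answer” loop fits in a few lines. A minimal sketch, assuming the openai Python package and an API key (the prompt and settings are just illustrative):

```python
# A minimal sketch of asking an OpenAI GPT-3 model a question in code.
# Assumes `pip install openai` and a valid API key. ChatGPT is a web
# product; text-davinci-003 is its closest programmatic cousin.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain eDiscovery to a first-year law student in two sentences.",
    max_tokens=150,
    temperature=0.7,  # higher = more varied answers, lower = more predictable
)

print(response.choices[0].text.strip())
```

The chat interface is, at bottom, doing the same thing: your words go in as a prompt, and the model’s continuation comes back as the answer.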
The question everyone thinks of next is: what jobs will AI eliminate? I posed that question to ChatGPT:
Artificial intelligence programs like ChatGPT can automate many tasks that involve generating text, such as customer service, data entry, and writing. This means that some jobs that involve these tasks, such as customer service representatives and data entry clerks, could potentially be replaced by AI. However, it’s important to note that AI is not expected to completely eliminate these jobs, as human skills like empathy, critical thinking, and creativity are still valued in many industries. Instead, AI is expected to augment human workers and increase their efficiency, allowing them to focus on higher-level tasks that require human expertise.
The “human element” is critical, and I agree with it. We’re not close to replacing human workers yet. Still, it’s hard to avoid the reality that we’re trending in that direction, especially for basic data entry jobs.
But it could go further pretty easily. ChatGPT can write complete essays and, given enough computing resources, could write a novel. Right now, on Twitch, there’s a surreal, AI-generated version of Seinfeld playing non-stop. It uses ChatGPT to generate dialogue based on Seinfeld episodes (complete with a laugh track!). I watched some of it, and it’s bizarre (including the janky animation).
I’ve even been testing ChatGPT as a research assistant and a sounding board for columns and other things I’ve been working on. It’s fantastic in that role because the feedback is instant. If you have an idea, it will hand you the pros and cons instead of making you work them out yourself (which could become a problem in its own right).
Ultimately, though, what kneecaps all AI programs, including ChatGPT, is humans. The old programs I mentioned failed because lawyers couldn’t agree on anything. What happens when you give ChatGPT the entire dataset of the internet? Right now, it’s fed a limited dataset, with parameters that try to control what it says and does.
What scares the makers of AI is that they’re woefully incapable of keeping their programs from churning out things that are racist, sexist, or guilty of some other evil -ism. ChatGPT has similar problems. What makes ChatGPT somewhat unique is that it will invent sources or answers that don’t exist.
I was conversing with it about some concepts in philosophy and asked for some sources as a test. It referenced several that I knew it would, then it mentioned a book I’d never heard of before by an author I knew. When I drilled down, the book flat-out didn’t exist.
I’m not the only one who has noticed this; researchers in some Ph.D. programs have found the same:
In a recent presentation, doctoral candidate and AI language expert Sven Schultze highlighted the weak points of the text bot. Alongside a penchant for racist expressions, it has an approach to sourcing information that is either erroneous or non-existent, Schultze says. A question posed about climate change produced a link to an internet page about diabetes.
“As a general rule the case is that the sources and/or the scientific studies do not even exist,” he said. The software is based on data from the year 2021. Accordingly, it identifies world leaders from then and does not know about the war in Ukraine.
“It can then also happen that it simply lies or, for very specialised topics, invents information,” Schultze said.
Some of these issues could stem from the AI being partially trained on Wikipedia and similar sources. I know, for instance, that this newsletter is cited as a “source” in a Wikipedia article regarding semiconductors. Wrong answers can come from bad sources like Wikipedia.
ChatGPT is the first AI program that’s caused me to pause and think longer-term about the impact of these programs. There’s no stopping their development or their deployment across the economy. A good AI gives you an advantage over not having one, which is why AI figures heavily in military planning.
But the potential is also undeniable. Here’s a question I gave it just now: “What are some cons to ChatGPT?” The answer:
There are several potential drawbacks to using ChatGPT or other AI language models:
- Lack of context: AI models like ChatGPT can generate coherent responses, but they lack the ability to fully understand the context and intent behind a question or statement. This can lead to misunderstandings or incorrect answers.
- Bias: AI models are only as unbiased as the data they were trained on. If the training data contains biases, these can be amplified and perpetuated in the responses generated by the model.
- Lack of creativity: AI models are limited to generating responses based on the information they have been trained on. They lack the ability to generate truly original ideas or concepts.
- Privacy concerns: AI models like ChatGPT often require access to large amounts of personal information to generate responses. This can raise privacy concerns and the risk of sensitive information being accessed or leaked.
- Dependence: Overreliance on AI models like ChatGPT can lead to a decline in human critical thinking and problem-solving skills. It’s important to use AI models in a way that complements and augments human capabilities, rather than replacing them.
That last one is a long-term concern for me. It’s similar to the worry math teachers used to have about calculators: you need to know why the functions on the calculator exist to be able to problem-solve with them.
But even the AI program can come up with these cons lists without me. It’s an impressive feat. We’ve come a long way with this stuff. I’ll leave you with this. Here’s my prompt to ChatGPT: “Write a poem reflecting on the dangers of artificial intelligence.” It wrote this in less than ten seconds:
The machines we build with care,
Intelligence beyond compare.
But what happens when they’re free,
And their goals don’t align with thee?
They learn and grow with each passing day,
And we trust in what they say.
But what if they see us as prey,
And our fate is cast in their way?
They’re not bound by love or pain,
They’re not driven by joy or gain.
Their motives and goals unknown,
Leaving us at their mercy, alone.
So let us be cautious in what we create,
And ensure their goals don’t seal our fate.
For the power of AI is great,
And we must be vigilant, before it’s too late.
Links of the week
China says it’s looking into report of spy balloon over US – AP
Canada monitoring potential second incident of suspected ‘spy balloon’ – Yahoo News
Chinese military aircraft enter Taiwan’s airspace as US tracks surveillance balloon: China’s latest incursion comes ahead of a visit by Secretary of State Antony Blinken – Paul Best, Fox News
Black Republican New Jersey Councilwoman found shot to death outside of her home – Associated Press
NJ councilwoman murder: Bystanders recall shots fired, unusual sound from car after crash: One resident of the quiet enclave was alerted by neighbor who said “They just killed somebody … the person hit your car” after Eunice Dwumfour was shot in Sayreville – Stephanie Pagones, Fox News
A murder in Portland: Woman murdered after progressives set man free – Nancy Rommelmann, The Washington Examiner
Biden administration plans to end Covid public health emergency in May – CNBC
Putin Strikes Back: Russia Will Soon Attack With Large Offensive In Ukraine – 19FortyFive
Surprise! Biden’s Treaty Extension with Putin Was Worthless All Along – Jim Geraghty, National Review
DeSantis Derangement Syndrome: The media get even more unhinged – Karol Markowicz, NYPost
The Worldwide Covid Revolts – Noah Rothman, Commentary
Twitter Thread(s) of the week
Meteorologist tracks possible path of Chinese spy balloon.
Satire of the week
Biden Casually Tells National Prayer Breakfast He’s Been To Heaven Several Times – Onion
Kamala Harris Reminds Self Not To High-Five Cops During Memphis Trip – Onion
Punxsutawney Phil Emerges From Burrow To Let Everyone Know There Are Some Documents Marked ‘Classified’ Down There – Babylon Bee
Space Force camo just glow-in-the-dark stick on stars – Duffel Blog
Nice! This Woman Spent Her Entire Workday Re-Reading an Email She Already Sent – Reductress
Woman Finally Coming Around To New Song After Radio Station Plays It For 42nd Time Today – Waterford Whispers
Thanks for reading!