Stop using AI for these 9 work tasks - here's why


ZDNET's key takeaways

  • Sometimes an AI can cause you or your company irreparable harm.
  • Sharing confidential information with an AI could have legal consequences.
  • Don't let an AI talk to customers without supervision.

A few weeks ago, I shared with you "9 programming tasks you shouldn't hand off to AI - and why." It's full of well-reasoned suggestions and recommendations for how to avoid having an AI produce code that could ruin your whole day.

Then, my editor and I got talking, and we realized the whole idea of "when not to use an AI" could apply to work in general. In this article, I present to you nine things you shouldn't use AI for while at work. This is far from a comprehensive list, but it should make you think.

Also: This one feature could make GPT-5 a real game changer (if OpenAI gets it right)

"Always support successful mind that AI isn't going to publication you your Miranda Rights, wrap your individual accusation successful ineligible protections for illustration HIPAA, aliases hesitate to disclose your secrets," said LinkedIn Learning AI coach Pam Baker, nan bestselling writer of ChatGPT For Dummies and Generative AI For Dummies.

"That goes double for activity AI, which is monitored intimately by your employer. Whatever you do aliases show AI tin and apt will beryllium utilized against you astatine immoderate point."

To keep things interesting, read on to the end. There, I share some fun and terrifying stories about how using AI at work can go terribly, horribly, and amusingly wrong.

Without further ado, here are nine things you shouldn't do with AI at work.

1. Handling confidential or sensitive data

This is an easy one. Every time you give the AI some data, ask yourself how you would feel if it were posted to the company's public blog or wound up on the front page of your industry's trade journal.

Also: The best AI for coding in 2025 (and what not to use)

This concern also includes information that might be subject to disclosure regulations, such as HIPAA for health information or GDPR for personal data for folks operating in the EU.

Regardless of what the AI companies tell you, it's best to simply assume that everything you feed into an AI is now grist for the model-training mill. Anything you feed in could later wind up in a response to somebody's prompt, somewhere else.

2. Reviewing or writing contracts

Contracts are designed to be detailed and specific agreements on how two parties will interact. They are considered governing documents, which means that writing a bad contract is like writing bad code. Baaad things will happen.

Do not ask AIs for help with contracts. They will make errors and omissions. They will make stuff up. Worse, they will do so while sounding authoritative, so you're more likely to use their advice.

Also: You can use Google's Math Olympiad-winning Deep Think AI model now - for a price

Also, the terms of a contract are often governed by the contract itself. In other words, many contracts say that what's actually in the contract is confidential, and that if you share the particulars of your contract with any outside party, there will be dire consequences. Sharing with an AI, as discussed above, is like publishing on the front page of a blog.

Let me be blunt. If you let an AI work on a contract and it makes a mistake, you (not it) will be paying the price for a long, long time.

3. Using an AI for legal advice

You know the trope where what you share with your lawyer is protected information and can't be used against you? Yeah, your friendly neighborhood AI is not your lawyer.

As reported in Futurism, OpenAI CEO (and ChatGPT's main cheerleader) Sam Altman told podcaster Theo Von that there is no legal confidentiality when using ChatGPT for your legal concerns.

Also: Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy

Earlier, I discussed how AI companies might use your data for training and embed that data in prompt responses. However, Altman took this concern up a notch. He suggested OpenAI is obligated to share your conversations with ChatGPT if they are subpoenaed by a court.

Jessee Bundy, a Knoxville-based attorney, amplified Altman's message in a tweet: "There's no legal privilege when you use ChatGPT. So if you're pasting in contracts, asking legal questions, or asking it for strategy, you're not getting legal advice. You're generating discoverable evidence. No attorney/client privilege. No confidentiality. No ethical duty. No one to protect you."

She summed up her observations with a particularly damning statement: "It might feel private, safe, and convenient. But lawyers are bound to protect you. ChatGPT isn't, and can be used against you."

4. Using an AI for health or financial advice

While we're on the subject of guidance, let's hit two other categories where highly trained, licensed, and regulated professionals are available to provide advice: healthcare and finance.

Look, it's probably fine to ask ChatGPT to explain a medical or financial concept to you as if you were a five-year-old. But when it comes time to ask for real advice that you plan on considering as you make major decisions, just don't.

Let's step away from the liability risk issues and focus on common sense. First, if you're using something like ChatGPT for real advice, you have to know what to ask. If you're not trained in these professions, you might not know.

Also: What Zuckerberg's 'personal superintelligence' sales pitch leaves out

Second, ChatGPT and other chatbots can be spectacularly, overwhelmingly, and almost unbelievably wrong. They misconstrue questions, fabricate answers, conflate concepts, and generally provide questionable advice.

Ask yourself, are you willing to bet your life or your financial future on something that a people-pleasing robot made up because it thought that's what you wanted to hear?

5. Presenting AI-generated work as your own

When you ask a chatbot to write something for you, do you claim it as your own? Some folks have told me that because they wrote the prompts, the resulting output is a product of their creativity.

Also: I found 5 AI content detectors that can correctly identify AI text 100% of the time

Yeah? Not so much. Webster's defines "plagiarize" as "to steal and pass off (the ideas or words of another) as one's own," and to "use (another's production) without crediting the source." The dictionary also defines plagiarize as "to commit literary theft: present as new and original an idea or product derived from an existing source."

Does that not sound like what a chatbot does? It sure does "present as new and original an idea…derived from an existing source." Chatbots are trained on existing sources. They then parrot back those sources after adding a bit of spin.

Let's be clear. Using an AI and saying its output is yours could cost you your job.

6. Talking to customers without monitoring the chatter

The other day, I had a technical question about my Synology server. I filed a support ticket after hours. A bit later, I got an email response from a self-identified support AI. The cool thing was that the reply was complete and just what I needed, so I didn't have to escalate my ticket to a human helper.

Also: Is AI overhyped or underhyped? 6 tips to separate fact from fiction

But not all AI interactions with customers go that well. Even a year and a half later, I'm still chuckling about the Chevy dealer chatbot that offered a $55,000 Chevy Tahoe truck to a customer for a buck.

It's perfectly fine to provide a trained chatbot as one support option for customers. But don't assume it's always going to be right. Ensure customers have the option to talk with a human. And monitor the AI-enabled process. Otherwise, you could be giving away $1 trucks, too.
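If you do deploy a support bot, the guardrails can be mundane. Here's a minimal Python sketch of the two habits just described: log every exchange so a person can audit what the bot is saying, and hand the conversation to a human whenever the bot is unsure or starts making commitments (like quoting prices). The function names, thresholds, and patterns are all hypothetical stand-ins, not any particular vendor's API.

```python
# Minimal sketch: chatbot as one option, human as the fallback.
# All names and thresholds here are hypothetical; adapt to your own stack.
import logging
import re

logging.basicConfig(filename="support_bot.log", level=logging.INFO)

RISKY_PATTERNS = [
    r"\$\d",                                   # any dollar figure -- pricing promises need a human
    r"\b(refund|legally|contract|guarantee)\b",
]

def needs_human(reply: str, confidence: float, threshold: float = 0.8) -> bool:
    """Escalate when the bot is unsure or the reply makes commitments."""
    if confidence < threshold:
        return True
    return any(re.search(p, reply, re.IGNORECASE) for p in RISKY_PATTERNS)

def handle_ticket(question: str, bot_reply: str, confidence: float) -> str:
    # Log every exchange so a person can review what the bot told customers.
    logging.info("Q: %s | A: %s | confidence: %.2f", question, bot_reply, confidence)
    if needs_human(bot_reply, confidence):
        return "A support agent will follow up with you shortly."
    return bot_reply

# Example: a pricing "promise" gets routed to a human instead of the customer.
print(handle_ticket("How much is the Tahoe?", "It's yours for $1!", 0.95))
```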

7. Making final hiring and firing decisions

According to a study by resume-making app Resume Builder, a majority of managers are using AI "to determine raises (78%), promotions (77%), layoffs (66%), and even terminations (64%)."

"Why are you firing me?"

"It's not my fault. The AI made maine do it."

Yeah, that. Worse, apparently at least 20% of managers, most of whom haven't been trained in the rights and wrongs of AI use, are using AIs to make final employment decisions without even bothering to oversee the AI.

Also: Open-source skills can save your career when AI comes knocking

But here's the rub. Jobs are often governed by labor laws. Despite the current anti-DEI push coming from Washington, bias can still lead to discrimination lawsuits. Even if you haven't technically done anything wrong, defending against a lawsuit can be expensive.

If you cause your company to be on the receiving end of a lawsuit because you couldn't be bothered to be human enough to double-check why your AI was canning Janice in accounting, you'll be the next one handed a pink slip. Don't do it. Just say no.

8. Responding to journalists or media inquiries

I'm going to tell you a little secret. Journalists and writers do not exist solely to promote your company. We'd like to help, certainly. It feels good knowing we're helping folks grow their businesses. But, and you'll need to sit down for this news, there are other companies.

We are also busy. I get thousands of emails each day. Hundreds of them are about the newest and by far most innovative AI company ever. Many of those pitches are AI-generated because the PR folks couldn't be bothered to take the time to focus their pitch. Some of them are so bad that I can't even tell what the PRs are trying to hawk.

But then, there's the other side. Sometimes, I'll reach out to a company, willing to use my most valuable asset -- time -- on their behalf. When I get back a response that's AI-driven, I'll either move on to the next company (or mock them on social media).

Also: 5 entry-level tech jobs AI is already augmenting, according to Amazon

Some of those AI-driven answers are really, really inappropriate. However, because the AI is representing the company instead of, you know, maybe a thinking human, an opportunity is lost.

Keep in mind that I don't like publishing things that will cost someone their job. But other writers are not necessarily similarly inclined. A properly run business will not only use a human to respond to the press, but will also limit the humans allowed to represent the company to those properly versed in what to say.

Or go ahead and cut corners. I always need fun fodder for my Facebook feed.

9. Using AI for coding without a backup

Earlier, I wrote "9 programming tasks you shouldn't hand off to AI," which detailed programming tasks you should avoid passing on to an AI. I've long been nervous about ceding too much work to an AI, and quite concerned about managing codebase maintenance.

But I didn't really understand how far stupid could go when it came to delegating coding work to the AI. I mean, yes, I know AIs can be stupid. And I sure know humans can be stupid. But when AIs and humans work in tandem to advance the cause of their stupidity together, the results can be truly awe-inspiring.

In "Bad vibes: How an AI supplier coded its measurement to disaster," my ZDNET workfellow Steven Vaughan-Nichols wrote astir a developer who happily vibe-coded himself to an almost-complete portion of software. First, nan AI hard-coded lies astir really portion tests performed. Then nan AI deleted his full codebase.

It's not necessarily wrong to use AI to help you code. But if you're using a tool that can't be backed up, or you don't bother to back up your code first, you're simply doing your best to earn a digital Darwin award.
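In that spirit, here is a minimal sketch in Python, with hypothetical paths, of the bare-minimum habit: take a timestamped snapshot of the project before any AI coding tool is allowed to touch it. Proper version control and off-machine backups are better still; this is just the floor.

```python
# Minimal sketch of "back up the code before the AI touches it."
# Paths and names are hypothetical; point them at your own project.
from datetime import datetime
from pathlib import Path
import shutil

def snapshot_project(project_dir: str, backup_dir: str) -> Path:
    """Zip the project into a timestamped archive before an AI tool edits it."""
    src = Path(project_dir).resolve()
    dest_root = Path(backup_dir).resolve()
    dest_root.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive_base = dest_root / f"{src.name}-{stamp}"
    # shutil.make_archive appends the .zip extension itself
    archive_path = shutil.make_archive(str(archive_base), "zip", root_dir=src)
    return Path(archive_path)

if __name__ == "__main__":
    saved = snapshot_project("./my-project", "./backups")
    print(f"Snapshot written to {saved} -- now let the AI loose.")
```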

Bonus: Other examples of what not to do

Here's a lightning round of boneheaded moves using AI. They're just too good (and by good, I mean bad) not to recount:

  • Letting a chatbot manage job applicant data: Remember how we told you not to use an AI for hiring and firing? McDonald's uses a chatbot to screen applicants. Apparently, the chatbot exposed millions of applicants' personal information to a hacker who used the password 123456.
  • Replacing support staff with an AI, and gloating: The CEO of e-commerce platform Dukaan terminated 90% of his support staff and replaced them with an AI. Then he bragged about it. On Twitter/X. The public response was less than positive. Way less.
  • Producing a reading list consisting of all fake titles: The Chicago Sun-Times, usually a very well-respected paper, published a summer reading list generated by an AI. The gotcha? None of the books were real.
  • Suggesting terminated employees turn to a chatbot for comfort: An Xbox producer (yes, that's Microsoft) suggested that ChatGPT or Copilot could "help reduce the emotional and cognitive load that comes with job loss" after Microsoft terminated 9,000 employees. Achievement unlocked.

What about you? Have you seen an AI go off the rails at work? Have you ever been tempted to delegate a task to a chatbot that, in hindsight, probably needed a human touch? Do you trust AI to handle sensitive data, communicate with customers, or make decisions that affect people's lives? Where do you draw the line in your work? Let us know in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
