Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!panix!.POSTED.panix3.panix.com!not-for-mail
From: risko@csl.sri.com (RISKS List Owner)
Newsgroups: comp.risks
Subject: Risks Digest 33.90
Date: 20 Oct 2023 02:30:24 -0000
Organization: PANIX Public Access Internet and UNIX, NYC
Lines: 614
Sender: RISKS List Owner <risko@csl.sri.com>
Approved: risks@csl.sri.com
Message-ID: <CMM.0.90.4.1697768907.risko@chiron.csl.sri.com28376>
Injection-Info: reader2.panix.com; posting-host="panix3.panix.com:166.84.1.3";
logging-data="9617"; mail-complaints-to="abuse@panix.com"
To: risko@csl.sri.com

RISKS-LIST: Risks-Forum Digest Thursday 19 October 2023 Volume 33 : Issue 90

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/33.90>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>

Contents:
How ChatGPT and other AI tools could disrupt scientific publishing
(Nature)
`Algorithmic destruction' and the deep algorithmic problems of AI
and copyright (San Francisco Chronicle)
A Chatbot Encouraged Him to Kill the Queen. It's Just the Beginning (WiReD)
Dilemma of the Artificial Intelligence Regulatory Landscape
(CACM Vol 66 No 9)
Experts Worry as Facial Recognition Comes to Airports and
Deepfake Election Interference in Slovakia (Bruce Schneier)
A big win in our fight to reclaim the Internet! (Mozilla)
Win $12k by rediscovering the secret phrases that secure the Internet
(New Scientist)
Your old phone is safe for longer than you think (WashPost)
How do you get out of a $28,000 timeshare mistake? (Eliott)
The TSA wants to put a government tracking app on your smartphone
(PapersPlease)
New York Bill Would Require a Criminal Background Check to Buy a 3D Printer
(Gizmodo)
Burned-out parents seek help from a new ally: ChatGPT (geoff goodfellow)
Allied Spy Chiefs Warn of Chinese Espionage Targeting Tech Firms (NYTimes)
Top crypto firms named in $1bn fraud lawsuit (BBC)
The secret life of Jimmy Zhong, who stole and lost more than $3B (CNBC)
Why do people fall for grief scams? (Rob Slade)
Remote Driving Is a Sneaky Shortcut to the Robotaxi (WiReD)
Re: Autonomous Vehicles Are Driving Blind (Chris Volpe)
Re: False news spreads faster than the truth (Amos Shapir)
Re: Vermont Utility Plans to End Outages by Giving Customers
Batteries (John Levine)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Sat, 14 Oct 2023 09:52:05 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: How ChatGPT and other AI tools could disrupt scientific publishing
(Nature)

https://www.nature.com/articles/d41586-023-03144-w

When radiologist Domenico Mastrodicasa finds himself stuck while writing a
research paper, he turns to ChatGPT, the chatbot that produces fluent
responses to almost any query in seconds. “I use it as a sounding board,”
says Mastrodicasa, who is based at the University of Washington School of
Medicine in Seattle. “I can produce a publication-ready manuscript much
faster.”

[I am reminded of Fred Brooks and Bill Wright doing a Markov chain
analysis of 37 common-meter hymn tunes, and Al Hopkins and I a year later
in Tony Oettinger's statistical linguistics seminar (1954-1955) randomly
generating more than 600 hymn tunes consistent with various chain lengths
in weekend runs on the Harvard Mark IV. This is documented in our paper,
An Experiment in Musical Composition, IRE Transactions on Electronic
Computers EC-6, 175-182, September 1957. You can find it on the Web.

Harvard Poet David McCord wrote these common-meter lyrics, which he read
at the introduction of a Univac 1 (taking literary liberty with the
identity of the computer):

O God, our help in ages past,
Thy help we now eschew.
Hymn tunes on Univac at last,
Dear God, for Thee, for You.
We turn them out almighty fast,
Ten books to every pew.

Our HymnBot almost 70 years ago was obviously a very primitive
small-language precursor of the current ChatBot rage. PGN]
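
  [For readers curious what such a chain-based generator looks like, here
  is a minimal Python sketch of order-k Markov generation over a toy pitch
  alphabet. The training tunes, scale-degree encoding, and chain length are
  invented for illustration; they are not the 1957 data or program.

    # Illustrative sketch of order-k Markov melody generation.
    import random
    from collections import defaultdict

    def train(tunes, k):
        """Count which pitch follows each k-note context in the training tunes."""
        table = defaultdict(list)
        for tune in tunes:
            for i in range(len(tune) - k):
                table[tuple(tune[i:i + k])].append(tune[i + k])
        return table

    def generate(table, k, length, seed=None):
        """Emit `length` pitches by repeatedly sampling from learned contexts."""
        rng = random.Random(seed)
        out = list(rng.choice(list(table)))   # start from a context seen in training
        while len(out) < length:
            choices = table.get(tuple(out[-k:]))
            if not choices:                   # dead end: restart from a known context
                choices = table[rng.choice(list(table))]
            out.append(rng.choice(choices))
        return out

    # Toy "common-meter" training data: scale degrees 1-8, purely illustrative.
    tunes = [
        [1, 3, 5, 5, 6, 5, 4, 3, 2, 1],
        [5, 5, 6, 5, 3, 1, 2, 3, 2, 1],
        [1, 2, 3, 4, 5, 6, 5, 4, 3, 1],
    ]
    table = train(tunes, k=2)                 # chain length 2, as one example
    print(generate(table, k=2, length=16, seed=42))
  ]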

------------------------------

Date: September 24, 2023 10:06:15 JST
From: Ellen Ullman <ullman@well.com>
Subject: `Algorithmic destruction' and the deep algorithmic problems of AI
and copyright (San Francisco Chronicle)

[From Dave Farber's IP distribution]

Could ‘algorithmic destruction’ solve AI’s copyright issues?
https://www.sfchronicle.com/tech/article/ai-artificial-intelligence-copyright-18374295.php

By Chase DiFeliciantonio, 23 Sep 2023

OpenAI’s ChatGPT is trained by “consuming” vast amounts of information
online. Some authors have sued OpenAI alleging the company unfairly used
their copyrighted works to teach its chatbots how to respond to written
prompts. One way to fix that could be to employ “algorithmic destruction.”

If artificial intelligence mimics our brains, does that mean it too can
unlearn something it knows?

That question is central to lawsuits filed by a range of creatives who say
their copyrighted work was infringed by OpenAI and Meta. But making an AI
“forget” isn’t the same as removing the blocky chips from HAL 9000’s digital
brain in “2001: A Space Odyssey.” In fact, the lawsuits raise the question:
is “unlearning” even possible for an AI? And, if not, are there other ways
to ensure generative AI programs don’t draw from copyrighted material, short
of tearing them down?

Enter “algorithmic destruction,” a term that entails trashing an AI model
that may have taken years and millions of dollars to train, then rebuilding
it from scratch by inputting only fair-use text, images and data.

That would be “the most extreme remedy” to issues highlighted in lawsuits
like those filed against OpenAI and Meta, said Pamela Samuelson, a UC
Berkeley professor and expert in generative AI and copyright law.

But, she said, it’s not unthinkable.

Here’s how it might work:

Since AI models aren't some baby powder that can be easily recalled and
remade after slapping a company with a fine, there are basically three
approaches given the current way the technology works, plus one more path
that would change how it “thinks,” said UC Berkeley professor and computer
scientist Matei Zaharia:

Destroy the model.
“Screen” results that include copyrighted material.
Retrain the model.

A fourth route would be to invent models that work more like a super-smart
web search, and which can cite sources unlike chatbots such as GPT-3 which,
similar to a human brain, doesn't always know where it learned something, or
whether it’s totally accurate.

That way programmers could, in theory, pull documents from a model’s
training set — like nodes of the HAL 9000’s processor — so the program could
no longer reference them when asked a question, said Zaharia, who is working
on that kind of approach.
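
A rough sketch of that retrieval-style idea, assuming nothing about
Zaharia's actual system: a toy document store with naive keyword-overlap
scoring (the class and method names are invented for illustration). Because
answers are assembled from an explicit index, a document can be removed and
the system can then neither draw on nor cite it.

  # Illustrative only: a toy retrieval store whose documents can be removed.
  # Scoring is naive keyword overlap; a real system would use embeddings.
  class DocumentStore:
      def __init__(self):
          self.docs = {}                       # doc_id -> text

      def add(self, doc_id, text):
          self.docs[doc_id] = text

      def remove(self, doc_id):
          self.docs.pop(doc_id, None)          # "pull" a document from the set

      def answer(self, query, top_k=2):
          q = set(query.lower().split())
          scored = sorted(
              ((len(q & set(text.lower().split())), doc_id, text)
               for doc_id, text in self.docs.items()),
              reverse=True,
          )
          hits = [(d, t) for score, d, t in scored[:top_k] if score > 0]
          # Unlike an opaque chatbot, every snippet carries its source.
          return [{"source": d, "snippet": t} for d, t in hits]

  store = DocumentStore()
  store.add("novel-1", "the copyrighted novel about a detective in San Francisco")
  store.add("manual-1", "a public domain manual about repairing bicycles")
  print(store.answer("detective novel"))       # cites novel-1
  store.remove("novel-1")                      # pulled; it can no longer be referenced
  print(store.answer("detective novel"))       # novel-1 no longer appears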

With the way the dominant generative AI technology works for now, though,
“it’s hard to make models forget specific content,” said Zaharia, who is
also the co-founder and CTO of San Francisco’s Databricks.

The easiest way to keep a program from spitting out information it shouldn’t
“would probably be after your model generates something, but before you send
it back to the user, you check, ‘Is this really close to something’ ” like a
copyrighted work, Zaharia said.
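
As a concrete illustration of that screening step (a generic sketch, not
any vendor's actual filter; the similarity measure, threshold, and blocklist
below are assumptions), generated text could be compared against known
protected passages before being returned:

  # Illustrative post-generation screen: before returning model output,
  # check whether it is "really close to" any known protected text.
  # difflib's ratio is a stand-in; the 0.8 threshold is invented.
  from difflib import SequenceMatcher

  PROTECTED_TEXTS = [
      "It was the best of times, it was the worst of times.",
  ]

  def too_similar(output, threshold=0.8):
      return any(
          SequenceMatcher(None, output.lower(), text.lower()).ratio() >= threshold
          for text in PROTECTED_TEXTS
      )

  def screened_reply(generate, prompt):
      """Generate a reply, but withhold it if it nearly reproduces a protected text."""
      output = generate(prompt)
      if too_similar(output):
          return "[response withheld: too close to protected material]"
      return output

  # Toy "model" standing in for the real generator.
  fake_model = lambda prompt: "It was the best of times, it was the worst of times."
  print(screened_reply(fake_model, "Start a novel for me."))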

Telling the program to skate around copyrighted material, specifically,
would probably not work since, like a toddler with superpowers, “it doesn't
really know” what is copyrighted and what isn’t, Zaharia said.

------------------------------

Date: Thu, 19 Oct 2023 01:04:01 -0400
From: Gabe Goldberg <gabe@gabegold.com>
Subject: A Chatbot Encouraged Him to Kill the Queen. It's Just the
Beginning (WiReD)

Companies are designing AI to appear increasingly human. That can mislead
users—or worse.

Humans are prone to see two dots and a line and think they’re a face. When
they do it to chatbots, it’s known as the Eliza effect. The name comes from
the first chatbot, Eliza, developed by MIT scientist Joseph Weizenbaum in
1966. Weizenbaum noticed users were ascribing erroneous insights to a text
generator simulating a therapist. [...]

Mental health chatbots may carry similar risks. Jodi Halpern, a professor of
bioethics at UC Berkeley, whose work has challenged the idea of using AI
chatbots to help meet the rising demand for mental health care, has become
increasingly concerned by a marketing push to sell these apps as caring
companions. She's worried that patients are being encouraged to develop
dependent relationships—of “trust, intimacy, and vulnerability”—with an
app. This is a form of manipulation, Halpern says. And should the app fail
the user, there is often no mental health professional ready to come to
their aid. Artificial intelligence cannot stand in for human empathy, she
says.

https://www.wired.com/story/chatbot-kill-the-queen-eliza-effect

------------------------------

Date: Sun, 15 Oct 2023 16:49:04 -0400
From: Cliff Kilby <cliffjkilby@gmail.com>
Subject: Dilemma of the Artificial Intelligence Regulatory Landscape
(CACM Vol 66 No 9)

In the opinion piece ``Dilemma of the Artificial Intelligence Regulatory
Landscape,'' Wu and Liu note that, with the rapid expansion of LLMs and
progress toward true AI, regulatory frameworks are woefully unprepared. The
authors are of the opinion that implementation of new features should be
accelerated regardless of the regulatory gaps, stating, ``We have found the
key to settling concerns is to clearly convey the message that potential
benefits outweigh relevant risks.'' This is an extremely troubling approach,
the kind of thinking that led to terrible things like the ozone hole over
Antarctica, and one I would caution strongly against. Doubly so, considering
that LLMs regularly demonstrate that they pose at least as many risks as
benefits, if not more. Copyright lawsuits, intellectual-property disputes,
and even a case of libel have all arisen in relation to LLMs. In summation,
I disagree.

