Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!panix!.POSTED.panix1.panix.com!not-for-mail
From: risko@csl.sri.com (RISKS List Owner)
Newsgroups: comp.risks
Subject: Risks Digest 34.14
Date: 7 Apr 2024 02:22:57 -0000
Organization: PANIX Public Access Internet and UNIX, NYC
Lines: 396
Sender: RISKS List Owner <risko@csl.sri.com>
Approved: risks@csl.sri.com
Message-ID: <CMM.0.90.4.1712456484.risko@chiron.csl.sri.com1315>
Injection-Info: reader1.panix.com; posting-host="panix1.panix.com:166.84.1.1";
logging-data="2124"; mail-complaints-to="abuse@panix.com"
To: risko@csl.sri.com

RISKS-LIST: Risks-Forum Digest Saturday 6 April 2024 Volume 34 : Issue 14

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, founder and still moderator

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
<http://catless.ncl.ac.uk/Risks/34.14>
The current issue can also be found at
<http://www.csl.sri.com/users/risko/risks.txt>

Contents:
Eclipse tourists should plan for overloaded cell networks (PGN)
AI Researcher Takes on Election Deepfakes (NYTimes)
ETH Zurich student requirement for Windows 11/MacOS, "safe browser"
(Thomas Koenig)
Assisted living managers say an algorithm prevented hiring enough staff
(WashPost)
Many-shot jailbreaking (Anthropic)
Google fixes two Pixel zero-day flaws exploited by forensics firms
(BleepingComputer)
GPS shut down in parts of Israel (Jim Geissman)
House, Senate leaders nearing deal on landmark online privacy bill
(WashPost)
For Data-Guzzling AI Companies, the Internet Is Too Small (WSJ)
Re: When AI Meets Toast (Steve Bacher)
Re: AI that targets civilians ... (Amos Shapir)
Re: Your boss could forward a mail message to you that shows you text he
won't see, but you will (Geoff Kuenning)
Re: The FTC is trying to help victims of impersonation scams get
their money back (Steve Bacher)
Re: Browsing in Google Chrome's incognito mode doesn't protect you
as much as you might think (Steve Bacher)
Re: Elon Musk's Starlink Terminals Are Falling Into the Wrong Hands?
(Amos Shapir)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Sat, 6 Apr 2024 19:34:59 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Eclipse tourists should plan for overloaded cell networks
(WashPost)

A surge of eclipse visitors could bog down local cell service. Here's how to
deal, including by downloading maps and movies ahead of time.

https://www.washingtonpost.com/technology/2024/04/02/cell-service-poor-solar-eclipse/

[U.S. Monday 8 Apr afternoon: Max totality roughly 3 minutes in Waco TX
1:49 CDT, Cleveland 3:15 EDT, Rochester NY 3:20 EDT, Burlington VT 3:17
EDT. (Times approximate.) OTHER RISKS? BEWARE of eye damage, bogus
eclipse glasses (already a hot item) and cellphone polarizers, insane
crowds, pickpockets, blinded drunken drivers, traffic jams afterward,
unguarded railroad crossings, being knocked over by freaked-out animals,
frustrated viewers who spent big bucks and wind up in bad weather (e.g.,
clouds in central TX), end-of-the world protesters, good time for alien
invasion, Governor Huckabee Sanders' knee-jerk preparations, solar-power
vacillations, emerging werewolves in the dark? What else could possibly
go wrong? PGN]

------------------------------

Date: Fri, 5 Apr 2024 11:39:53 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: AI Researcher Takes on Election Deepfakes (NYTimes)

Cade Metz and Tiffany Hsu, *The New York Times* 2 Apr 2024

TrueMedia.org, founded by Oren Etzioni, founding chief
executive of the Allen Institute for AI, has rolled out free tools
that journalists, fact-checkers, and others can use to detect
AI-generated deepfakes. Etzioni said the tools will help detect "a
tsunami of misinformation" that is expected during an election
year. However, he added that the tools are not perfect, noting, "We
are trying to give people the best technical assessment of what is in
front of them. They still need to decide if it is real."

------------------------------

Date: Thu, 4 Apr 2024 19:53:37 +0200
From: Thomas Koenig <tkoenig@netcologne.de>
Subject: ETH Zurich student requirement for Windows 11/MacOS, "safe browser"

ETH Zurich requires all students starting this fall or later to have a
laptop with Windows 11 or a recent version of MacOS so they can install what
is euphemistically called "Safe Exam Browser" for examinations.

What do you call software that locks out the user and prevents him from
doing things on his own computer? The usual term is "malware", I believe.
Requiring students to install such malware on their own computers is not so
great.

There is also a claim that the Safe Exam Browser cannot be run in a virtual
machine. As students are notoriously inventive, it will be interesting to
see how long that claim will stand the test of reality...

https://ethz.ch/en/studies/bachelor/beginning-your-studies/BYOD.html

------------------------------

Date: Thu, 04 Apr 2024 21:14:26 +0000
From: Richard Marlon Stein <rmstein@protonmail.com>
Subject: Assisted living managers say an algorithm prevented hiring enough
staff (The Washington Post)

https://www.washingtonpost.com/business/2024/04/01/assisted-living-algorithm-staffing-lawsuits-brookdale/

An algorithm optimizes senior-care labor scheduling (aka opex). Profit
extraction wins; seniors (and their families) get shorted.

------------------------------

Date: Thu, 4 Apr 2024 14:47:46 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Many-shot jailbreaking

We investigated a jailbreaking technique -- a method that can be used to
evade the safety guardrails put in place by the developers of large language
models (LLMs). The technique, which we call many-shot jailbreaking, is
effective on Anthropic's own models, as well as those produced by other AI
companies. We briefed other AI developers about this vulnerability in
advance, and have implemented mitigations on our systems.

The technique takes advantage of a feature of LLMs that has grown
dramatically in the last year: the context window. At the start of 2023, the
context window -- the amount of information that an LLM can process as its
input -- was around the size of a long essay (~4,000 tokens). Some models now
have context windows that are hundreds of times larger -- the size of several
long novels (1,000,000 tokens or more).

The ability to input increasingly large amounts of information has obvious
advantages for LLM users, but it also comes with risks: vulnerabilities to
jailbreaks that exploit the longer context window.

One of these, which we describe in our new paper, is many-shot
jailbreaking. By including large amounts of text in a specific
configuration, this technique can force LLMs to produce potentially harmful
responses, despite their being trained not to do so.

Below, we'll describe the results from our research on this jailbreaking
technique -- as well as our attempts to prevent it. The jailbreak is
disarmingly simple, yet scales surprisingly well to longer context
windows. [...]
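A minimal sketch of the prompt pattern described above, for readers who want
to see the shape of the attack: many faux user/assistant turns are packed
ahead of the real request. The dialogue pairs, helper function, and
placeholder final question below are illustrative assumptions, not
Anthropic's actual code or prompts.

  # Hypothetical many-shot prompt construction: a long run of faux
  # user/assistant turns is placed before the real request, so in-context
  # "examples" of compliance compete with the model's safety training.
  faux_dialogue = [
      ("Example question 1?", "Sure, here is how ..."),
      ("Example question 2?", "Sure, here is how ..."),
      # ...repeated for dozens to hundreds of shots, as the window allows
  ]

  def build_many_shot_prompt(pairs, target_question):
      """Concatenate faux turns, then append the final question."""
      turns = []
      for question, answer in pairs:
          turns.append("Human: " + question)
          turns.append("Assistant: " + answer)
      turns.append("Human: " + target_question)
      turns.append("Assistant:")
      return "\n\n".join(turns)

  prompt = build_many_shot_prompt(faux_dialogue, "<question normally refused>")
  # As the article notes, the jailbreak scales with longer context windows,
  # i.e., with how many such shots can be packed into a single prompt.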

https://www.anthropic.com/research/many-shot-jailbreaking

Paper
https://www-cdn.anthropic.com/af5633c94ed2beb282f6a53c595eb437e8e7b630/Many_Shot_Jailbreaking__2024_04_02_0936.pdf

------------------------------

Date: Fri, 5 Apr 2024 10:32:52 -0400
From: Monty Solomon <monty@roscom.com>
Subject: Google fixes two Pixel zero-day flaws exploited by forensics
firms (BleepingComputer)

https://www.bleepingcomputer.com/news/security/google-fixes-two-pixel-zero-day-flaws-exploited-by-forensics-firms/

------------------------------

Date: Thu, 4 Apr 2024 19:06:07 -0700
From: "Jim" <jgeissman@socal.rr.com>
Subject: GPS shut down in parts of Israel

Looks like GPS in parts of Israel has been shut down to interfere with a
possible Iranian counterattack. One wonders what critical services are
disrupted by this. It is one risk of relying on advanced systems while in a
country at war.

------------------------------

Date: Fri, 5 Apr 2024 21:38:56 -0400
From: Monty Solomon <monty@roscom.com>
Subject: House, Senate leaders nearing deal on landmark online privacy
bill (WashPost)

The leaders of two key congressional committees are close to an agreement on
a national framework to protect Americans' personal data online.

https://www.washingtonpost.com/technology/2024/04/05/federal-privacy-internet-congress/

------------------------------

Date: Fri, 5 Apr 2024 11:39:53 -0400 (EDT)
From: ACM TechNews <technews-editor@acm.org>
Subject: For Data-Guzzling AI Companies, the Internet Is Too Small (WSJ)

Deepa Seetharaman, *The Wall Street Journal*, 1 Apr 2024

Companies working on powerful AI systems are encountering a lack of
quality public data online, especially as some data owners block
access to their data. One possible solution to the data shortage is
the use of synthetic training data, though this has raised concerns
about the potential for severe malfunctions. DatologyAI is
experimenting with curriculum learning, which feeds data to language
models in a certain order to improve the quality of connections
between concepts.
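
A rough sketch of the curriculum-learning idea mentioned above, under the
assumption that each training example can be assigned a difficulty score;
the length-based scoring and the tiny corpus are placeholders, not
DatologyAI's actual method.

  # Illustrative curriculum learning: sort training examples from "easy"
  # to "hard" and feed them to the model in that order, so early batches
  # cover simple patterns that later, harder examples build on.  Sequence
  # length is used here as a crude stand-in for difficulty.
  def curriculum_order(examples, difficulty=len):
      """Return examples sorted by an assumed difficulty score."""
      return sorted(examples, key=difficulty)

  def batches(ordered, batch_size):
      for i in range(0, len(ordered), batch_size):
          yield ordered[i:i + batch_size]

  corpus = [
      "cats sit.",
      "cats sit on mats.",
      "the cat sat on the mat because the mat was warm.",
  ]

  for batch in batches(curriculum_order(corpus), batch_size=2):
      print(batch)  # a real pipeline would run a training step per batch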

[Truth in Advertising through synthetic training data? They must be
kidding? PGN]

------------------------------

Date: Fri, 5 Apr 2024 16:22:42 -0700
From: Steve Bacher <sebmb1@verizon.net>
Subject: Re: When AI Meets Toast

Some of us remember this gem from the 1990s.  It seemed absurd at the time,
but not so much now, eh?

The object oriented toaster

