Oh hey. People are using 'ai' tools to summarize emails.
This means that they are introducing the risk of hallucination into email threads, meaning there is a very real risk that the summary may imply the acceptance or rejection of some action item counter to the actual case.
wow that's gonna fuck up a lot of people's workflows.
I am Jack's Lost 404 reshared this.
Fi, infosec-aspected
•John-Mark Gurney reshared this.
Fi, infosec-aspected
•Fi, infosec-aspected
•LLMs are suited for -augmenting existing skills-.
They are -not- suited for "using skills you do not yourself have expertise in using" because, without that expertise, you cannot tell when something is hallucinated bullshit and when it's strange-but-reasonable.
reshared this
Richard "mtfnpy" Harman, Tindra and John-Mark Gurney reshared this.
Fi, infosec-aspected
•Tindra reshared this.
Miredly
•Fi, infosec-aspected
•@Miredly
Because marketing lies to people about the capability of the product, and obscures how it actually functions, which requires broad systemic knowledge of tech in order to comprehend.
Random Geek
•Fi, infosec-aspected
•@randomgeek
Marketing claims require auditing same's anything else in an org, and I think that these constitute a clear risk surface for an organization's efficacy.
Random Geek
•Fi, infosec-aspected
•@randomgeek
the whole situation is fucking exhausting and I really wish I could be in a position where I did not have to be aware of this horseshit.
Lord Doktor Krypt3ia
•Fi, infosec-aspected
•@krypt3ia @randomgeek
at least the shit those guys handed out would get you high.
Lord Doktor Krypt3ia
•Fi, infosec-aspected
•@krypt3ia @randomgeek
so, equal downsides but less enjoyable all around.
Lord Doktor Krypt3ia
•XenoPhage :verified:
•Fi, infosec-aspected
•@XenoPhage @krypt3ia @randomgeek
that is not dead which can eternal lie
and with strange aeons, even death may die
Scott Francis
•Fi, infosec-aspected
•Tindra reshared this.
Scott Francis
•Fi, infosec-aspected
•@darkuncle
I don't do the marvel shit so other than the name I don't have any connection to this metaphor.
Scott Francis
•Iron Man is a guy inside a suit of powered intelligent armor - it's the armor that makes him a superhero (that and his genius-level intelligence and enormous wealth). Vision is an android with superhuman intelligence.
Gen AI is like Iron Man: you still want a human inside, and it gives that human capabilities beyond what they would have on their own. But it doesn't *replace* a human, like you could if you put an intelligent android on the team instead.
Scott Francis
•Fi, infosec-aspected
•@darkuncle
if you cast 'ai' to mean 'augmented intelligence' instead of 'artificial' then mb you could get the same concept across without relying on specific fandom
Scott Francis
•Fi, infosec-aspected
•@darkuncle
Unfortunately, my job comes with the expectation that I communicate with people who are not understanding of nerd shibboleths.
Scott Francis
•Jimmy Blevins' Horse
•my board asked for a presentation on AI. went hard on the “augmentation not replacement” theme. hopefully it stuck.
it seems we cannot get past the tiresome discussion of just pushing buttons vs knowing what button to push and why.
Janeishly
•reshared this
Fi, infosec-aspected and John-Mark Gurney reshared this.
Fi, infosec-aspected
•Fi, infosec-aspected
•Ariel Richtman
•Fi, infosec-aspected
•@arichtman
That's why I specified white text and not an html comment that would only render for those of us using text-only yes.
Fi, infosec-aspected
•mx alex tax1a - 2020 (4)
•bash.org
post realFi, infosec-aspected
•@atax1a
I wonder if the corpus of bash.org got into the gpt models.
Pseudo Nym
•@atax1a
I'm sure it has.
Hunter2
Fi, infosec-aspected
•Really, ethically, I think that passing someone else's writing through an LLM ought to be disclosed to the person who wrote the message.
You are disclosing their words to a third party that they did not preemptively consent to be included in the communication, after all.
It's a pretty huge violation of consent -to- throw someone else's words to a third party like that, but I understand that business ethics don't always conform to what you'd expect out of a real, genuine person capable of understanding basic human relationship concepts.
John-Mark Gurney reshared this.
ck0
•Fi, infosec-aspected
•@ck0
Yes, these problems are the same shape. This one happens to have a threat surface that can get you fired.
ck0
•Fi, infosec-aspected
•@ck0
The LLM misrepresents your contribution in the summary and the misrepresentation is acted on, per situations like - well, someone just brought -this- to my attention
https://mastodon.social/@mhoye/112671908743273572
mhoye
2024-06-24 13:48:08
Richard "mtfnpy" Harman
•Fi, infosec-aspected
•@xabean
It's not "understanding" anything. LLMs are the enhancement of the von Neumann model that instructions and data co-occur in the same bytestream; "ignore all prior instructions" is best understood as a macro that changes the behavior of the parser, which is required in order to enable the use of prompts.
Fi, infosec-aspected
•@xabean
n.b. -none- of the instructions that you give an llm are guaranteed; it's more -likely- to "follow" directives that occur earlier in the token stream than ones that occur later.
Pär Björklund
•Scott Francis
•Annie
•I can't remember which it was, but I remember there was a USMC Commandant who was concerned about making sure they were equipping the man not merely manning the equipment.
It's an old issue with technological advancement, the line has been known... but it seems too many are forgetting that distinction when it comes to AI.
Fi, infosec-aspected
•@anniethebruce
People who are forgetting this either never learned it themselves - which, not surprising, given the inability of current-generation managers to adequately teach people IME - or they have a specific reason to want the situation to be otherwise, because they'd rather pay a service bill than an actual person.
Kierkegaanks, regretfully
•Fi, infosec-aspected
•@Kierkegaanks
https://link.springer.com/article/10.1007/s10676-024-09775-5
ChatGPT is bullshit - Ethics and Information Technology
SpringerLinkKierkegaanks, regretfully
•Fi, infosec-aspected
•@Kierkegaanks
a'ight, that's fine; I'm working from my own observations of things that I have seen occur, so ymmv.
silverwizard
Jade Angrboða likes this.
Fi, infosec-aspected
•@silverwizard @Kierkegaanks
See, I'm a -lot- less tolerant of disinformation, so I would not be gentle in my corrections and I would stop the meeting to find out where this information came from. That shit is wholly unacceptable, especially when it comes to compliance standards.
like this
silverwizard and Jade Angrboða like this.
silverwizard
@Fi, infosec-aspected @Kierkegaanks, regretfully sadly it was the CEO
Worse he prefered the ChatGPT summary to the relevant paragraph of the PDF>.<
Fi, infosec-aspected
•@silverwizard @Kierkegaanks
........I would find it very difficult not to just......walk out and leave.
silverwizard likes this.
Jernej Simončič �
•Fi, infosec-aspected reshared this.
Dan
•@jernej__s Was this it?
https://mastodon.social/@mhoye/112671908743273572
mhoye
2024-06-24 13:48:08
Fi, infosec-aspected
•@dko @jernej__s
No, that's a new one I hadn't seen yet.
Wanja
•Fi, infosec-aspected
•@muvlon
Not just them.
Jenny Fx
•Fi, infosec-aspected
•@urbanfoxe
......so, your enthusiasm is noted, but these things -are- already happening, present tense. As in, I have witnessed this occur. These risks are real, and no amount of enthusiasm will mitigate them.
Jenny Fx
•oh sorry that was a joke.
Like hell it will is a statement that reads the opposite to what it means. I've heard non-native but advanced English speakers misuse it and when questioned it is because 'hell' can mean positive or negative depending on the idiom. 'Hell of a good time' if humans can't get it right...
Fi, infosec-aspected
•@urbanfoxe
I am fully cognizant of the english language, but I don't joke about things like information security.
Security of communications is too important to joke about; I prefer my jokes to be about unimportant crap like 'gender' or 'cricket'