Easy fix: don’t buy this garbage to begin with. It’s terrible for the environment, terrible for your privacy, and of dubious value in the first place.
If every man is an onion, one of my deeper layers is curmudgeon. So take that into account when I say fuck all portable speakers. I’m so tired of hearing everyone’s shitty noise. Just fucking everywhere. It takes one person feeling entitled to blast the shittiest music available to ruin the day of everyone within a 500-yard radius. If this is you, I hope you stub your toe on every coffee table, hit your head on every door jamb, and miss every bus.
I have a Google home. The only reason I have it is because Spotify gave them away for free back in 2019. It sits unplugged somewhere.
People are saying don’t get an Echo, but this is the tip of the iceberg. My coworkers’ cell phones are eavesdropping. My neighbors’ doorbells record every time I leave the house. Almost every new vehicle mines us for data. We can avoid some of the problem, but we cannot avoid it all. We need a bigger, more aggressive solution if we are going to have a solution at all.
Publicly, that is. They have no doubt been doing it in secret since they launched it.
Off-device processing has been the default from day one. The only thing changing is the removal of local processing on certain devices, likely because the new backing AI model can no longer run on that hardware.
With on-device processing, they don’t need to send audio. They can just send the text, which is infinitely smaller and easier to encrypt as “telemetry”. They’ve probably got logs of conversations in every Alexa household.
This has always blown my mind. Watching people willingly allow Big Brother-esque devices into their home for very, very minor conveniences like turning on some gimmicky multi-colored light bulbs. Now they’re literally using home “security” cameras that store everything on some random cloud server. I’ll truly never understand.
Why has no security researcher published evidence of these devices with microphones uploading random conversations? Nobody working on the inside has ever leaked anything regarding this potentially massive breach of privacy? A perfectly secret conspiracy by everyone involved?
We know more about top secret NSA programs than we do about this proposed Alexa spy mechanism. None of the people working on this at Amazon have wanted to leak anything?
I’m not saying it’s not possible, but it seems extremely improbable to me that everyone’s microphones are listening to their conversations, they’re being uploaded somewhere to serve them better ads, and absolutely nobody has leaked anything or found any evidence.
It’s better to be safe than sorry is all I’m saying.
Edit: There’s also this.
I mean… I 100% agree, and yet you and I and everyone reading this are carrying around a phone that can do the exact same shit
If you look at the article, it was only ever possible to do local processing with certain devices and only in English. I assume that those are the ones with enough compute capacity to do local processing, which probably made them cost more, and that the hardware probably isn’t capable of running whatever models Amazon’s running remotely.
I think that there’s a broader problem than Amazon and voice recognition for people who want self-hosted stuff. That is, throwing loads of parallel hardware at something isn’t cheap. It’s worse if you stick it on every device. Companies — even aside from not wanting someone to pirate their model running on the device — are going to have a hard time selling devices with big, costly, power-hungry parallel compute processors.
What they can take advantage of is that for a lot of tasks, the compute demand is only intermittent. So if you buy a parallel compute card, the cost can be spread over many users.
I have a fancy GPU that I got to run LLM stuff; it ran about $1000. Say I’m doing AI image generation with it 3% of the time. It’d be possible to do that compute on a shared system off in the Internet, and my share of the hardware cost would be about $30. That’s a heckofa big improvement.
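The back-of-envelope math here is simple enough to sketch (assuming the ~$1000 card and 3% duty cycle from the comment, and perfectly schedulable demand, which real workloads won’t quite achieve):

```python
# Back-of-envelope amortization: a GPU that one user needs only a small
# fraction of the time could, in principle, be shared across many users.
gpu_cost = 1000.0   # up-front hardware cost, dollars (assumed)
duty_cycle = 0.03   # fraction of time a single user actually needs it

# With perfect scheduling, one card could serve this many users:
users_per_card = 1 / duty_cycle           # ~33 users
# ...so each user's share of the hardware cost is:
cost_per_user = gpu_cost * duty_cycle     # ~$30

print(f"{users_per_card:.0f} users per card, ${cost_per_user:.0f} per user")
```

In practice demand is bursty and correlated (everyone wants compute at dinner time), so the real sharing factor is lower, but the order of magnitude holds.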
And the savings scale further in their situation, since a household might have multiple devices that want to run parallel-compute-heavy tasks. Without sharing, you’re talking about maybe $1k in hardware for each of them, not to mention supporting hardware like a beefy power supply.
This isn’t specific to Amazon. Like, this is true of all devices that want to take advantage of heavyweight parallel compute.
I think that one thing that it might be worth considering for the self-hosted world is the creation of a hardened network parallel compute node that exposes its services over the network. So, in a scenario like that, you would have one (well, or more, but could just have one) device that provides generic parallel compute services. Then your smaller, weaker, lower-power devices — phones, Alexa-type speakers, whatever — make use of it over your network, using a generic API.

There are some issues that come with this. It needs to be hardened, and can’t leak information from one device to another. Some tasks require storing a lot of state — AI image generation, for example, requires uploading a large model, and you want to cache that. If you have, say, two parallel compute cards/servers, you want to use them intelligently: keep the model loaded on one of them insofar as is reasonable, to avoid needing to reload it. Some tasks are very latency-sensitive, like voice recognition, and some, like image generation, are amenable to batch use, so some kind of priority system is probably warranted.

So there are some technical problems to solve.
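The priority-plus-model-caching part of that idea fits in a few lines. Here’s a minimal single-worker sketch; all the names (Job, ComputeNode, the fake “model load”) are made up for illustration, and a real node would add the hardening, multi-card scheduling, and network API on top:

```python
import itertools
import queue
import threading

_seq = itertools.count()  # tie-breaker so equal-priority jobs stay FIFO

class Job:
    def __init__(self, priority, model, payload):
        self.priority = priority          # lower = more urgent (0 = voice, 9 = batch)
        self.model = model                # which model this task needs
        self.payload = payload
        self.done = threading.Event()     # signaled when the worker finishes
        self.result = None

class ComputeNode:
    """Toy shared compute node: one worker, priority queue, model cache."""
    def __init__(self):
        self.q = queue.PriorityQueue()
        self.loaded_model = None          # cache: reload only on model change
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, job):
        # PriorityQueue pops the smallest tuple first, so latency-sensitive
        # jobs (low priority number) jump ahead of batch jobs.
        self.q.put((job.priority, next(_seq), job))
        return job

    def _worker(self):
        while True:
            _, _, job = self.q.get()
            if job.model != self.loaded_model:
                # stand-in for an expensive model (re)load
                self.loaded_model = job.model
            job.result = f"{job.model} processed {job.payload!r}"
            job.done.set()

node = ComputeNode()
fast = node.submit(Job(0, "asr-small", "set a timer"))       # latency-sensitive
slow = node.submit(Job(9, "sdxl", "a cat in space"))         # batch
fast.done.wait()
slow.done.wait()
print(fast.result)
print(slow.result)
```

The isolation problem (not leaking state between devices) is the hard part this sketch ignores; the queue and cache are the easy 10%.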
But otherwise, the only real option for heavy parallel compute is going to be sending your data out to the cloud.
Having per-household self-hosted parallel compute on one node is still probably more-costly than sharing parallel compute among users. But it’s cheaper than putting parallel compute on every device.
Linux has some highly isolated computing environments like seccomp that might be appropriate for implementing the compute portion of such a server, though I don’t know whether it’s too restrictive to permit running parallel compute tasks.
In such a scenario, you’d have a “household parallel compute server”, in much the way that one might have a “household music player” hooked up to a house-wide speaker system running something like mpd or a “household media server” providing storage of media, or suchlike.
People seem upset about this. I’m over here wondering wtf is an echo?
Today: “…they will be deleted after Alexa processes your requests.”
Some point in the not-so-distant future: “We are reaching out to let you know that your voice recordings will no longer be deleted. As we continue to expand Alexa’s capabilities, we have decided to no longer support this feature.”
“We lied and paid a $3M fine.”
And finally: “We are reaching out to let you know that Alexa key-phrase activation will no longer be supported. For better personalization, Alexa will always process audio in the background. Don’t worry, your audio is safe with us; we care deeply about your privacy.”
They could also transcribe the recording and only save the text. I mean, they absolutely will, and surely already do.
In the age of techno-fascism, the people willingly pay to install the listening devices into their own homes.
It’s always been this way for the cheap speakers. They’ve no processing power on-board and need the cloud just to tell you the time.
And people wonder why I never bought any of these kinds of things.
Now they can hear me scream “shut the fuck up Alexa!!!” every time she says “…by the way…” when I just want to know what time it is.
Me while cooking mac and cheese for the kids:
“Echo, set timer for 8 minutes”

Echo: “GOOD EVENING [me], SETTING TIMER FOR 8 MINUTES”

No, shut the fuck up and just set the goddamn timer without the extra fluff. I’ve seen Ex Machina, I know you have no empathy, so knock off the “nice” shit and do what I fucking ask without anything else.
There are a few settings that make it better, like enabling “brief mode” or something like that.
I have brief mode on, she doesn’t give a shit. I need “say the absolute minimum number of words” mode.
Say this: “Alexa, disable by the way”
“Alexa, from now on, call me ‘Big Dick Daddy from Cincinnati’.”
Wait hold on
I wonder if I can get the Google assistant British lady to call me that
Edit: Lmfao it works
To the recycling bin you go, Alexa
My family has one in most rooms of our house…ugh
Everything you say to your Echo…
I don’t have an Echo.
Removed by mod
Alexa, call 911. OP is having a stroke
No, that’s just good ol’ dementia.
Removed by mod
What?
Removed by mod
Are you a bad bot? He asked “what” because we can literally not understand what you mean.
Stop being a Nazi
deleted by creator