• MintyFresh@lemmy.world · +9 · 56 minutes ago

    Easy fix: don’t buy this garbage in the first place. It’s terrible for the environment, terrible for your privacy, and of dubious value to begin with.

    If every man is an onion, one of my deeper layers is curmudgeon. So take that into account when I say fuck all portable speakers. I’m so tired of hearing everyone’s shitty noise. Just fucking everywhere. It takes one person feeling entitled to blast the shittiest music available to ruin the day of everyone within a 500-yard radius. If this is you, I hope you stub your toe on every coffee table, hit your head on every door jamb, and miss every bus.

    • unalivejoy@lemm.ee · +3 · 52 minutes ago

      I have a Google home. The only reason I have it is because Spotify gave them away for free back in 2019. It sits unplugged somewhere.

  • 52fighters@lemmy.sdf.org · +2 · 18 minutes ago

    People are saying don’t get an Echo, but this is the tip of an iceberg. My coworkers’ cell phones are eavesdropping. My neighbors’ doorbells record every time I leave the house. Almost every new vehicle mines us for data. We can avoid some of the problem, but we cannot avoid it all. We need a bigger, more aggressive solution if we are going to have a solution at all.

    • SpaceNoodle@lemmy.world · +42 / -2 · 3 hours ago

      Off-device processing has been the default from day one. The only thing changing is the removal of local processing on certain devices, likely because the new backing AI model will no longer be able to run on that hardware.

      • 4am@lemm.ee · +12 / -1 · 1 hour ago

        With on-device processing, they don’t need to send audio. They can just send the text, which is infinitely smaller and easier to encrypt as “telemetry”. They’ve probably got logs of conversations in every Alexa household.
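A rough back-of-envelope sketch of that size gap (the numbers here are illustrative, not anything Amazon has published): a text transcript really is orders of magnitude smaller than the audio it came from, which is what makes it trivial to tuck into ordinary telemetry traffic.

```python
# Illustrative size comparison: raw audio vs. its transcript.
# 16 kHz / 16-bit mono PCM is a typical speech-recognition input format.
seconds = 10
audio_bytes = seconds * 16_000 * 2            # 320,000 bytes of raw audio

transcript = "echo set a timer for eight minutes"
text_bytes = len(transcript.encode("utf-8"))  # 34 bytes of text

ratio = audio_bytes // text_bytes             # thousands of times smaller
print(audio_bytes, text_bytes, ratio)
```

Even with heavy audio compression the gap stays in the hundreds-to-thousands range, and text is far cheaper to store and index too.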

        • b1t@lemm.ee · +9 · 47 minutes ago

          This has always blown my mind. Watching people willingly allow Big Brother-esque devices into their home for very, very minor conveniences like turning on some gimmicky multi-colored light bulbs. Now they’re literally using home “security” cameras that store everything on some random cloud server. I’ll truly never understand.

          • deranger@sh.itjust.works · +3 · 17 minutes ago (edited)

            Why has no security researcher published evidence of these devices with microphones uploading random conversations? Nobody working on the inside has ever leaked anything regarding this potentially massive breach of privacy? A perfectly secret conspiracy by everyone involved?

            We know more about top secret NSA programs than we do about this proposed Alexa spy mechanism. None of the people working on this at Amazon have wanted to leak anything?

            I’m not saying it’s not possible, but it seems extremely improbable to me that everyone’s microphones are listening to their conversations, they’re being uploaded somewhere to serve them better ads, and absolutely nobody has leaked anything or found any evidence.

          • loie@lemmy.world · +1 · 6 minutes ago

            I mean… I 100% agree, and yet you and I and everyone reading this are carrying around a phone that can do the exact same shit.

    • tal@lemmy.today · +12 / -1 · 2 hours ago

      If you look at the article, local processing was only ever possible on certain devices, and only in English. I assume that those are the ones with enough compute capacity to do local processing, which probably made them cost more, and that the hardware probably isn’t capable of running whatever models Amazon’s running remotely.

      I think that there’s a broader problem than Amazon and voice recognition for people who want self-hosted stuff. That is, throwing loads of parallel hardware at something isn’t cheap. It’s worse if you stick it on every device. Companies — even aside from not wanting someone to pirate their model running on the device — are going to have a hard time selling devices with big, costly, power-hungry parallel compute processors.

      What they can take advantage of is that for a lot of tasks, the compute demand is only intermittent. So if you buy a parallel compute card, the cost can be spread over many users.

      I have a fancy GPU that I got to run LLM stuff that ran about $1,000. Say I’m doing AI image generation with it 3% of the time. If that compute instead ran on a shared system off in the Internet, my share of the hardware cost would be roughly 3% of $1,000, or about $30. That’s a heckofa big improvement.
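The arithmetic behind that, as a toy sketch (the 3% utilization figure is this comment’s own rough guess, not a measurement):

```python
# Toy cost-sharing arithmetic: if each user only needs the card a small
# fraction of the time, one card can serve many users.
gpu_cost = 1000.0     # dollars for the parallel compute card
utilization = 0.03    # fraction of time one user actually uses it

users_per_card = 1 / utilization           # about 33 users can share one card
cost_per_user = gpu_cost / users_per_card  # about $30 of hardware per user

print(f"~{users_per_card:.0f} users per card, ~${cost_per_user:.0f} each")
```

The per-user share of the hardware scales linearly with utilization, which is the whole economic argument for cloud (or shared household) compute.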

      And the situation that they’re dealing with is even larger, since there might be multiple devices in a household that want to do parallel-compute-requiring tasks. So now you’re talking about maybe $1k in hardware for each of them, not to mention the supporting hardware like a beefy power supply.

      This isn’t specific to Amazon. Like, this is true of all devices that want to take advantage of heavyweight parallel compute.

      I think that one thing that might be worth considering for the self-hosted world is the creation of a hardened network parallel compute node that exposes its services over the network. In that scenario, you would have one device (well, or more, but one would do) that provides generic parallel compute services. Then your smaller, weaker, lower-power devices (phones, Alexa-type speakers, whatever) make use of it over your network, using a generic API.

      There are some issues that come with this. It needs to be hardened: it can’t leak information from one device to another. Some tasks require storing a lot of state; AI image generation, for example, requires uploading a large model, and you want to cache that. If you have, say, two parallel compute cards or servers, you want to use them intelligently, keeping the model loaded on one of them insofar as is reasonable to avoid reloading it. And some tasks, like voice recognition, are very latency-sensitive, while others, like image generation, are amenable to batch use, so some kind of priority system is probably warranted. So there are some technical problems to solve.
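A minimal sketch of what scheduling on such a node might look like. All the names here (ComputeNode, submit, and so on) are invented for illustration; a real node would also need sandboxing, authentication, and actual network transport. It shows the two points above: latency-sensitive tasks jump the queue, and the node avoids reloading a cached model when consecutive tasks use the same one.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class Task:
    priority: int   # 0 = latency-sensitive (voice), 9 = batch (image gen)
    seq: int        # tie-breaker: preserves submission order within a priority
    model: str = field(compare=False)
    payload: str = field(compare=False)

class ComputeNode:
    """Hypothetical household compute node: one shared accelerator,
    many small client devices submitting work over a generic API."""

    def __init__(self):
        self._queue = []          # heap of pending tasks, lowest priority value first
        self._seq = count()
        self.loaded_model = None  # cache the expensive-to-load model
        self.model_loads = 0      # count how often we paid the reload cost

    def submit(self, model: str, payload: str, priority: int) -> None:
        heapq.heappush(self._queue, Task(priority, next(self._seq), model, payload))

    def run_all(self):
        """Drain the queue in priority order, reloading models only on a switch."""
        results = []
        while self._queue:
            task = heapq.heappop(self._queue)
            if task.model != self.loaded_model:
                self.loaded_model = task.model  # stand-in for a slow model load
                self.model_loads += 1
            results.append((task.model, task.payload))
        return results
```

Submitting two batch image jobs and then a voice job, the voice job runs first even though it arrived last, and the two image jobs share a single model load.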

      But otherwise, the only real option for heavy parallel compute is going to be sending your data out to the cloud.

      Having per-household self-hosted parallel compute on one node is still probably more-costly than sharing parallel compute among users. But it’s cheaper than putting parallel compute on every device.

      Linux has some highly isolated computing environments, like seccomp, that might be appropriate for implementing the compute portion of such a server, though I don’t know whether it’s too restrictive to permit running parallel compute tasks.

      In such a scenario, you’d have a “household parallel compute server”, in much the way that one might have a “household music player” hooked up to a house-wide speaker system running something like mpd or a “household media server” providing storage of media, or suchlike.

  • DirkMcCallahan@lemmy.world · +25 / -1 · 2 hours ago

    Today: “…they will be deleted after Alexa processes your requests.”

    Some point in the not-so-distant future: “We are reaching out to let you know that your voice recordings will no longer be deleted. As we continue to expand Alexa’s capabilities, we have decided to no longer support this feature.”

    • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · +2 · 1 hour ago

      And finally: “We are reaching out to let you know that Alexa key-phrase activation will no longer be supported. For better personalization, Alexa will now always process audio in the background. Don’t worry, your audio is safe with us; we care deeply about your privacy.”

    • Eheran@lemmy.world · +2 / -1 · 2 hours ago

      They could also transcribe the recording and only save that. I mean, they absolutely will, and surely already do.

  • Lydia_K@lemmy.world · +11 · 2 hours ago

    In the age of techno-fascism, the people willingly pay to install the listening devices into their own homes.

  • yesman@lemmy.world · +18 · 2 hours ago

    It’s always been this way for the cheap speakers. They’ve no processing power on-board and need the cloud just to tell you the time.

  • CuddlyCassowary@lemmy.world · +13 · 3 hours ago (edited)

    Now they can hear me scream “shut the fuck up Alexa!!!” every time she says “…by the way…” when I just want to know what time it is.

    • lka1988@lemmy.dbzer0.com · +7 / -1 · 40 minutes ago (edited)

      Me while cooking mac and cheese for the kids:
      “Echo, set timer for 8 minutes”

      Echo: “GOOD EVENING [me], SETTING TIMER FOR 8 MINUTES”

      No, shut the fuck up and just set the goddamn timer without the extra fluff. I’ve seen Ex Machina, I know you have no empathy, so knock off the “nice” shit and do what I fucking ask without anything else.

      • Beacon@fedia.io · +4 · 1 hour ago

        There are a few settings that make it better, like enabling “brief mode” or something like that.

  • tal@lemmy.today · +5 / -2 · 2 hours ago

    Everything you say to your Echo…

    I don’t have an Echo.