Subscription bombing and how to mitigate it

(bytemash.net)

70 points | by homelessdino 2 hours ago

12 comments

  • pqdbr 1 hour ago
    Recently we suffered a different kind of subscription bombing: a hacker used our 'change credit card' form to 'clean' a list of thousands of credit cards, testing which ones would go through and approve transactions.

    He ran the attack from midnight to 7AM, so there were no humans watching.

    IPs were rotated on every single request, so no rate limiter caught it.

    We had Cloudflare Turnstile installed in both the sign up form and in all credit card forms. All requests were validated by Turnstile.

    We were running with the 'invisible' setting, and switched back to the 'recommended' setting after the incident, so I don't know if the less strict setting was to blame.

    Just like OP, our website - to avoid extra hassle for users - did not require e-mail validation, especially because we send very few e-mails.

    We never thought this could bite us this way.

    Every CC he tried was charged $1 as confirmation that it was valid, then immediately refunded; the form errored out if the CC did not approve the $1 transaction, and that error was his signal. About 10% of the ~2k requests went through.

    Simply adding a confirmation e-mail won't cut it: the hacker used - even though he did not need to - disposable e-mail address services.

    This is a big deal. Payment processors can ban you for allowing this to happen.

    • gib444 3 minutes ago
      Ouch. Just one credit card change per account?

      This is the kind of monitoring that only gets put in place after such an event, e.g. whole-subsystem analysis: the change-card feature being used thousands of times (well, proportional to scale) in 7 hours is a massive red flag.

  • m132 1 hour ago
    It's a problem, but I really dislike the solution. Putting a website with known security issues behind Cloudflare's Turnstile is comparable to enforcing code signing—works until it doesn't, and in the meantime, helps centralize power around a single legal entity while pissing legitimate users off.

    The Internet was carefully designed to withstand a nuclear war and this approach, being adopted en masse, is slowly turning it into a shadow of its former self. And despite the us-east1 and multiple Cloudflare outages of last year, we continue to stay blind to this or even rationalize it as a good thing, because that way if we're down, then so are our competitors...

    • pverheggen 48 minutes ago
      I wouldn't call this "known security issues", it's an inherent problem with any signup or forgot password page.

      Also, I doubt this is going to piss users off, since they added Turnstile in invisible mode, and selectively on certain pages in the auth flow. Already signed-in users will not be affected, even if the service is down. This is way different from sites like Reddit that use site-wide bot protection, which creates those interstitial captcha pages.

    • stingraycharles 1 hour ago
      So your solution would be to do nothing?

      Cloudflare is an excellent solution for many things. The internet was designed to withstand a nuclear war, but it wasn't designed for the level of hostility on the internet these days.

    • AussieWog93 1 hour ago
    Honestly I really like CloudFlare as a business. There's no vendor lock-in, just a genuinely good product.

      If they turn around later and do something evil, literally all I need to do is change the nameserver to a competitor and the users of my website won't even notice.

    • colesantiago 1 hour ago
      And your solution is to assume everyone on the internet is a good actor?

      How would you solve this at scale?

      • cuu508 1 hour ago
        How about a signup flow where the user sends the first email? They send an email to signups@example.com (or to a generated unique address), and receive a one-time sign-in link in the reply. The service would have to be careful not to process spoofed emails though.

        Another approach is to not ask for an email address at all, like here on HN.

        • whatevaa 23 minutes ago
          "The user just needs to be careful not to step on a landmine. Exact steps left as an exercise to the reader".

          Anybody can send email despite all the DMARC stuff; how do you "be careful" with spoofed email?

        • xwowsersx 39 minutes ago
          [dead]
  • HexDecOctBin 1 hour ago
    I was attacked in this way a couple of months back. I use a different email address for each account (of the pattern product@example.com), and use a separate address for Git commits (like git@example.com). It was this second one that was attacked and I ended up with some 500 emails within 12 hours. Fortunately, since I don't expect anyone to actually email me on the Git address, I just put up a filter to send them all to a separate folder to go over at my leisure.

    After 12 hours, the pace of emails came to a halt, and then I started receiving emails to made-up addresses of an American political nature on the same domain (I have wildcard aliases enabled), suggesting that someone was perhaps trying to vent some frustration. This only lasted for about half an hour before the attacker seems to have given up and stopped.

    Strangely, I didn't receive any email during the attack that the attacker might have been trying to hide, which has left me confused as to the purpose of this attack in the first place.

    • chicagojoe 44 minutes ago
      I had this happen recently too, also not covering up any email activity (I combed through 3000+ spam emails).

      Double check that there are no forwarding rules added to your inbox and add some protection against a SIM swap.

      In my case, they didn't compromise any of my accounts but did attempt to open a new credit card so it would be worth double checking your credit reports.

  • znnajdla 1 hour ago
    I absolutely refuse to use BigTech gatekeepers or useless CAPTCHAs (any sufficiently advanced bot can get around any CAPTCHA anyway). We solved this at our startup by running names through a simple LLM filter: if the name is gibberish like Px2846skxojw, just block the signup. Worked surprisingly well. Of course this is easy to get around if the bot knows what you're doing. But bots look for easy targets; as long as there are enough vibe-coded crap targets on the internet, they're not going to bother with circumventing a carefully designed app.
    • snowe2010 58 minutes ago
      Then you’re also blocking legitimate users that don’t want to be tracked and use services like iCloud Hide my Emails
    • steezeburger 35 minutes ago
      This doesn't seem like a very good solution to be honest. And why use an LLM for this? What if I want a legit random ass string as my username?
    • tholm 1 hour ago
      Using an LLM for this seems excessive when there are well established algorithms for detecting high entropy strings.
    • imiric 38 minutes ago
      So your solution is to deploy a black box that can be worked around with a basic lookup table for a single field?

      CAPTCHAs were never meant to work 100% of the time in all situations, or be the only security solution. They're meant to block lazy spammers and low-level attacks, but anyone with enough interest and resources can work around any CAPTCHA. This is certainly becoming cheaper and more accessible with the proliferation of "AI", but it doesn't mean that CAPTCHAs are inherently useless. They're part of a perpetual cat and mouse game.

      Like LLMs, they rely on probabilities that certain signals may indicate suspicious behavior. Sophisticated ones like Turnstile analyze a lot of data, likely using LLMs to detect pseudorandom keyboard input as well, so they would be far more effective than your bespoke solution. They're not perfect, and can have false positives, but this is unfortunately the price everyone has to pay for services to be available to legitimate users on the modern internet.

      I do share a concern that these services are given a lot of sensitive data which could potentially be abused for tracking users, advertising, etc., but there are OSS alternatives you can self-host that mitigate this.

    • mads_quist 1 hour ago
      Nice.
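As tholm notes, flagging gibberish names doesn't need an LLM. Pure Shannon entropy is noisy on strings this short (ordinary names score surprisingly high), so a sketch might combine a couple of cheap signals instead; the thresholds below are illustrative guesses, not tuned values:

```python
def looks_like_gibberish(name: str) -> bool:
    """Cheap heuristic: real names contain vowels and almost never digits.
    Thresholds are illustrative and would need tuning against real data."""
    letters = [c for c in name.lower() if c.isalpha()]
    if not letters:
        return True  # no letters at all: certainly not a name
    digit_ratio = sum(c.isdigit() for c in name) / len(name)
    vowel_ratio = sum(c in "aeiouy" for c in letters) / len(letters)
    return digit_ratio > 0.2 or vowel_ratio < 0.2
```

As snowe2010 and steezeburger point out downthread, any such filter will also reject some legitimate users, so it fits better as one scoring signal than as a hard block.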
  • mads_quist 1 hour ago
    A good old Honey Pot helped us at All Quiet "a lot" with those attacks. Basically all attacks are remediated by this. No need for Cloudflare etc.
    • grey-area 1 hour ago
      Can you expand on that? A separate honey pot sign up page invisible to real users, or something else?
      • mads_quist 1 hour ago
        You add "hidden" inputs to your HTML form with names like "First Name" or "Family Name". Bots will fill them out. You either expect them to stay empty, or fill them via JavaScript with something you expect. It's of course reverse-engineerable, but it does the trick.
        • alexjurkiewicz 1 hour ago
          Doesn't that break password manager autofill?
        • bevr1337 59 minutes ago
          Do you test this against password managers? Seems like this approach could generate false positives
        • grey-area 1 hour ago
          Thanks, I’ve seen scripted attacks bypass this sort of hidden input unfortunately (perhaps human assisted or perhaps just ignoring hidden fields).
          • jaggederest 8 minutes ago
            They often do actually ignore truly hidden fields (input type=hidden), but if you put them "behind" an element with CSS, or make them extremely small but still rendered, many get caught. It's similar to the cheeky prompt injection attacks people did/do against LLMs.
          • mads_quist 1 hour ago
            Sure, it's really basic of course.
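The honeypot check mads_quist describes reduces, server-side, to rejecting any submission where a CSS-hidden decoy field arrives non-empty. The field names here are hypothetical, and as the replies note, password-manager autofill can cause false positives, so real forms usually add `autocomplete="off"` and `tabindex="-1"` to the decoys.

```python
# The form would render decoys along the lines of (hypothetical markup):
#   <input name="contact_me_hp" style="position:absolute;left:-9999px"
#          tabindex="-1" autocomplete="off">
HONEYPOT_FIELDS = ("contact_me_hp", "family_name_hp")  # made-up names

def is_bot_submission(form: dict) -> bool:
    # Humans never see (or fill) these inputs, so any non-empty value
    # means an automated form filler walked the DOM.
    return any(form.get(field) for field in HONEYPOT_FIELDS)
```

Per jaggederest's point above, hiding the decoys with CSS rather than `type=hidden` is what catches the bots that skip truly hidden inputs.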
  • linolevan 1 hour ago
    Well written piece on an attack vector I'd never thought too hard about before. Thanks for elaborating on why sending an email or two to a random person helps an attacker achieve their goal. A lot of similar articles skip over details like that.
  • tariky 1 hour ago
    I had a similar situation on a WooCommerce shop, but with many more signups per hour. Putting Turnstile in front fixed the problem.

    My conclusion is to move away from WordPress as fast as possible; every WordPress site I manage gets bombarded by bots.

    • somat 42 minutes ago
      Hell, every non-WordPress site I manage also gets bombarded by WordPress bots. (Not really - I am stretching the term to refer to WordPress attack attempts for dramatic purposes. But that still ends up being about 99% of my personal site traffic.)
  • Jean-Philipe 40 minutes ago
    Thanks for explaining this! I saw this happen to some of my Web sites and I couldn't wrap my head around why someone would do this...
  • cuu508 1 hour ago
    > If a bot creates an account with someone else’s email, the victim gets one email, if they ignore it that’s the end of it. The welcome email and everything after it only fires once the user verifies.

    As a user, I would prefer no welcome email at all.

    • tomjen3 41 minutes ago
      Yeah, that's part of why I hate "login with SERVICE". The big benefit would be not spamming me, but they always insist on getting my email.

      There was a time when you had to select "sign me up for your newsletter"; then it came pre-checked and you had to uncheck it; then you had to check a box to not get emails; and now you don't even get that choice.

      And lately? You have to go dig through your email because you can't set a password (looking at you Claude), so you can't filter email.

    • devmor 1 hour ago
      Then there's no verification step, preventing the entire mechanism of you not getting spammed.
      • JoshTriplett 1 hour ago
        It sounds like cuu508 didn't want the post-verification welcome, as opposed to the one-time verification message.
        • cuu508 1 hour ago
          Correct.
          • sodapopcan 1 hour ago
            Yes, correct. When I clicked the link I was already welcomed by the welcome page (which is, for the most part, welcomed). But then why send me another email further welcoming me? I already feel welcomed! And don't give me any of that "because it works" BS (even though that is what you are going to say).

            (cuu508, "you" in this instance does not mean you)

  • msephton 1 hour ago
    How can an affected user recover from such an attack?
  • queenkjuul 1 hour ago
    I had my email stolen in such an attack; I still get random "you abandoned your cart!" emails now and then. But luckily (?) they got my credit card at the same time and I cancelled it within minutes. So it's a little annoyance, but it doesn't really make sense to me that the flood works. At least not with American credit cards, which routinely flag my own trips to Microcenter lol

    Editing to add: almost 100% of these emails came from the same e-commerce product; I'll have to look up which. But every site I got an email from was running the same off-the-shelf template.

  • nubg 1 hour ago
    This post was written by AI, there are multiple clues.

    Author, why can you not use your own words?

    I am not sure what you meant to say, vs what is LLM garbage I could have prompted myself.

    • denismi 1 hour ago
      I am quite confident that the following was NOT LLM:

      > New users were signing up but not doing anything, they weren’t creating an org, a project, or a deployment, they just left an account sitting there.

      Surely the LLM version is:

      > New users were signing up but not doing anything; they weren't creating an org, a project, or a deployment—they just left an account sitting there.

      • nubg 54 minutes ago
        It really depends on the LLM and the wrapper prompt. There are many other giveaways though, which I am not going to name, to avoid burning them.
    • wdutch 1 hour ago
      I can't comment on whether it was written by AI or not, but I found the OP informative and quite dense with useful information. Nothing stood out to me as garbage.
      • nubg 1 hour ago
        I agree the topic and most of the content is legit!

        Which makes it even more annoying, because you don't know which are the good bits where somebody is sharing his unique insight, and which is just taken from the LLM's world knowledge.

        • chii 50 minutes ago
          So you're merely prejudiced against LLM-generated content, even if it's good?

          Why not accept that it is good, and forget about it being LLM?