Is there a way AskLemmy and other major communities could prevent new users from making posts in the future?
Like an account has to be over a month old to post for example. Maybe that could help prevent these kinds of disgusting attacks
I don’t know if Lemmy has a moderator tool available that could do something like that though.
I don’t quite like that idea. It’s something I really hated on Reddit. It just discourages new people from joining. Besides, you could self host an instance with accounts claiming to be made in 1970.
Unfortunately there aren’t many great options right now. No one likes it, but people posting CSAM are the ones to blame there. They quite literally ruin it for everyone because they’re butthurt about something happening they didn’t like
Do we know what they are butthurt about? There is never an excuse for what they are doing, but I’m curious what happened to set it off, if the reason is known.
Nope, they’re too cowardly to use their actual accounts and are making them anonymously. All we know is that rather than being mature about a mod action and simply leaving and creating an account elsewhere they decided to do this.
Gotcha. Thanks.
Good point. I didn’t think about how easy that would be to fake.
That said I would still prefer it to some subreddit’s cryptic karma requirements. If it worked I mean.
And here’s the spot where I point out that using a blockchain for recording accounts would be a good technological fit for a decentralized system like the Fediverse, and then get pilloried for being a “cryptobro” or whatever.
Seriously, all that you’d need to use the blockchain for would be a basic record of “this account holder has this name on that instance” and you get all sorts of unspoofable benefits from that. No tokens, no fancy authentication if you don’t want it, just a distributed database that you can trust.
How would that help? A spam bot could just make lots of blockchain wallets.
what are the benefits? I struggle to come up with any benefits.
The issue that was being discussed was blocking accounts from posting if they were younger than a certain age. The blockchain has an unspoofable timestamp on its records.
I see. I’m not convinced that proving the account creation date makes much of a difference here. Obviously the instance records when you sign up, so you would only need this to protect against malicious instances. But if a spammer is manipulating their instance to allow them to spam more, you have a much bigger problem than reliably knowing their account creation date.
It’s a matter of trust. A random instance can always lie and you can only determine “that was a malicious instance that was lying to me” in hindsight after it’s broken that trust. Since a malicious instance-runner can spin up new instances almost as easily as creating new fake accounts you end up with a game of whack-a-mole where the malicious party can always get a few bad actions through before getting whacked. Whereas if user account creation was recorded on a blockchain you don’t need to ever trust the instance in the first place. You can always know for sure that an account is X days old.
A malicious instance-runner could still spin up fresh instances and fake accounts ahead of time, but it forces them to do it X days in advance and now if they want to keep attacking they have a longer delay time on it. A community that’s under attack could set the limit to 30 days, for example, and now the attacker is out of action for a full month until their next crop of fake instances is “ripe.” As always with these sorts of decentralized systems there’s tradeoffs and balances to be struck. The idea is to make things as hard for malicious users as possible without making it harder for the non-malicious ones in the process. Right now the cycle time for the whack-a-mole is “as fast as the attacker wants it to be” whereas with a trustworthy account age authentication layer the cycle time becomes “as slow as the target wants it to be.”
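To make the shape of that check concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the `ledger` dict just stands in for whatever on-chain lookup would actually be used); it only shows the enforcement logic:

```python
import time

# Hypothetical on-chain lookup: returns the Unix timestamp at which this
# account's creation was recorded, or None if there is no record at all.
# In a real system this would query a blockchain node or indexer; here
# "ledger" is a plain dict standing in for that record.
def get_onchain_registration(account_id, ledger):
    return ledger.get(account_id)

def old_enough(account_id, ledger, min_age_days, now=None):
    """Reject accounts whose on-chain record is younger than min_age_days."""
    now = time.time() if now is None else now
    registered = get_onchain_registration(account_id, ledger)
    if registered is None:
        return False  # no trustworthy record, so treat as brand new
    return (now - registered) >= min_age_days * 86400

# Example: a community under attack raises the threshold to 30 days.
ledger = {"alice@example.social": 1_700_000_000}  # made-up record
```

Freshly minted accounts, and accounts with no record at all, simply fail the check, which is what sets the attacker’s cycle time.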
Thank you for writing the explanation! I still think that this doesn’t need a blockchain. Instances could broadcast user creation, so each instance could validate user age on its own (or ask other trusted instances when they first “saw” that user).
Fundamentally, blockchain solves the problem that there is no central source of trust, but in the Fediverse people necessarily trust the instance they sign up with, so a blockchain can’t add much in my opinion.
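For what it’s worth, the “ask trusted instances when they first saw that user” variant could be as simple as taking the earliest sighting across the logs you trust. This is only an illustrative sketch; none of these names are real Lemmy or ActivityPub APIs:

```python
# Sketch of the no-blockchain alternative: each instance keeps its own
# "first seen" timestamp for remote users and can cross-check against
# the logs of trusted peers. Data shapes here are invented: each log is
# just a dict of user id -> Unix timestamp of first sighting.
def earliest_sighting(user, local_log, trusted_peer_logs):
    """Earliest time any trusted instance saw this user, or None."""
    sightings = [log[user]
                 for log in [local_log, *trusted_peer_logs]
                 if user in log]
    return min(sightings) if sightings else None
```

The tradeoff is exactly the one under discussion: the answer is only as trustworthy as the peers you choose to ask.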
Instead of preempting criticism/downvotes, perhaps it would help to more clearly describe what kind of implementation of blockchain you mean?
If it would still involve some questionable consensus mechanism that either consumes a large amount of energy (Proof-of-Work) or may benefit larger stakeholders (Proof-of-Stake), then even setting aside the cryptocurrency associations, I’m not sure it’s necessarily worth it. However, if I’m not mistaken, there are implementations that may not require those, but may still provide the sort of benefit you’re suggesting, aren’t there?
I’ve elaborated in some of the subsequent comments. I guess I wanted to “test the waters” a bit; if I got a strong negative reaction for simply mentioning a blockchain-based solution I would have sighed and moved on.
Proof-of-stake doesn’t benefit larger stakeholders any more than it benefits smaller stakeholders; the common “rich-get-richer” objection is based on a misunderstanding of how the economics of staking actually operate. Since every staker is rewarded in exact proportion to the size of their stake, large stakers and small stakers grow at the same relative rate. It’s actually proof-of-work that has an inherent centralization pressure, due to the economies of scale that come from running large mining farms.
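A quick back-of-the-envelope illustration of the proportional-rewards point, with entirely made-up numbers:

```python
# Toy illustration: when every staker earns the same percentage yield,
# relative holdings never shift. Stakes and rewards are invented numbers.
def apportion_rewards(stakes, total_reward):
    total = sum(stakes.values())
    return {who: stake + total_reward * stake / total
            for who, stake in stakes.items()}

stakes = {"whale": 1_000_000, "minnow": 100}
# Pay out 40_004 in rewards, i.e. 4% of the 1_000_100 total stake:
after = apportion_rewards(stakes, total_reward=40_004)
# Both balances grow by exactly 4%, so the whale/minnow ratio is unchanged.
```

Whatever the yield, the ratio between any two stakers’ holdings stays constant, which is the whole point of the rebuttal.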
That wasn’t what I was referring to, but I should have phrased that part of my comment better. When I wrote that it may benefit larger stakeholders more, what I had meant was that, by my rough understanding, larger stakeholders have more influence or sway over the consensus mechanism. It’s been a while since I last looked into it, so I can’t remember the details exactly, but that’s what I recall of what I read.
It wasn’t the rich-get-richer problem, so much as the rich-hold-outsized-influence problem. Similar but distinct.
It may be counterintuitive, but stakers don’t actually have influence over the consensus mechanism. It’s actually the other way around. Consider it this way: the stake that a staker puts up is a hostage that the staker is providing to the blockchain. If I stake a million dollars’ worth of Ether, I’m basically telling the blockchain’s users “you can trust me to process blocks correctly because if I fail to do so you can destroy my million-dollar stake.” I have a million dollars riding on me following the blockchain’s rules. That’s literally why it’s called a “stake.”
The people who are actually “in charge” of which consensus rules are in use are the userbase as a whole, the ones who pay transaction fees and give Ether value by purchasing it from the validators. If some validators were to go rogue and create a fork that was to their liking but not to the liking of the userbase, the rogue validators would be holding worthless tokens on a blockchain that nobody is using. You can see the effects of this by the way the blockchain is continuing to update in ways that are good for the general userbase but not necessarily for the validators - MEV-burn, for example, is a proposal that would reduce the amount of money that validators could make but there’s no concern that I’ve seen about the validators somehow “rejecting” it. If the userbase wants it the validators can’t reject it without losing much more than they could hope to gain.
Ironically, proof-of-work is more vulnerable to this kind of thing. If a proof-of-work chain were to fork and a substantial majority of the validators didn’t agree with the fork then they could attack it with 51% attacks. The forked chain would need to change its PoW algorithm to stop the attacks, and that would destroy all the “friendly” miners along with the attackers.
Validators in a PoS blockchain could also launch attacks at a contentious fork, but they’d burn their stake in the process whereas the validators that did what the userbase wanted would keep theirs. So there’s a powerful incentive to just go along with the userbase’s desires.
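As a crude toy model of that incentive (invented numbers, and it ignores actual Ethereum slashing mechanics entirely; it only illustrates why attacking the userbase’s chosen fork is a losing move):

```python
# Crude payoff model of the "stake as hostage" argument. All values are
# made up; this is not how real slashing is computed.
def validator_payoff(stake, ongoing_fees, attacks_userbase_fork):
    if attacks_userbase_fork:
        return -stake        # slashed, or left holding worthless tokens
    return ongoing_fees      # stake keeps its value and keeps earning

honest = validator_payoff(stake=32_000, ongoing_fees=1_500,
                          attacks_userbase_fork=False)
rogue = validator_payoff(stake=32_000, ongoing_fees=1_500,
                         attacks_userbase_fork=True)
```

However large the one-off gain from an attack might be, it has to outweigh the entire stake plus all future fee income, which is why going along with the userbase dominates.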
I’m not saying you’re wrong, but why would this be the first time blockchain stopped illegal activity instead of facilitating it? It’s like 15-year-old tech and hasn’t made a significant impact outside of niche projects like cryptocurrencies.
To the first, there are a vast number of legal applications for blockchains.
To the second, it’s not the same tech as it was 14 years ago. There have been a lot of advancements over that period.
If you trace ActivityPub’s lineage back to its origin, it’s 14 years old too - it started as OpenMicroBlogging in 2009. It then became OStatus, which became standardized as ActivityPub. It’s barely the same thing any more. The same thing has happened with blockchains, the version of Bitcoin that launched in 2009 is nothing like the cutting-edge stuff like Ethereum is these days.
Putting aside that this use case doesn’t meet the five requisites for blockchain use, the fediverse in general and Lemmy in particular are already struggling with too much data being stored and moved.
Searching for “the five requisites for blockchain use” isn’t finding anything relevant, what requisites do you mean?
This wouldn’t be storing more data, it would be storing existing data. It would just be putting it somewhere that can be globally read and verified.
How do you store data in a decentralised way without having many redundant copies? The decentralisation of blockchain comes from many machines maintaining their own copy of the entire history. The entire concept inherently stores more data. Your suggestion is to literally store more data; claiming it won’t store more data only suggests you don’t know how blockchain works.
And that’s not even including the overhead of implementing a blockchain in the first place. Or the fact that you’ll be storing data on literally every user even if they never interact with your instance, or even if their instance is entirely blocked from yours. And there’s no way around that: if you do manage to selectively store some subset of users, then when you do need that data you’re trusting the subset of maintainers who have that user’s data, which, initially, is only the user’s home instance, so we’re back to square one.
Yes, my point is that that sort of thing is exactly what blockchains are for. They handle all of that already. So there’s no need for Fediverse servers to reinvent all of that, they can just use existing blockchains for it.
As someone (who’s not a fan of the fediverse) put it to me:
Fediverse is web2.5, worst of both web2.0 and web3.0.
I think there’s something to that. Web 2.0 thinking is so instilled in the fediverse’s makers that I’m not entirely sure their solutions can be trusted in the long term.
It makes sense that down the line, when bitcoin and crypto hype finally settles into knowing what’s actually useful, some sort of cryptographic mechanisms will become normal in decentralised tech. BlueSky may make this mainstream.
That’d be nice. Personally, I think the tech is just about ready - Ethereum has solved its environmental issues with proof-of-stake, and has solved its transaction cost issues with rollup-based “layer 2” blockchains. At this point I think the main obstacle is the knee-jerk popular reaction to anything blockchain-related as being some kind of crypto scam. I’m actually quite pleasantly surprised that I haven’t been downvoted through the floor for talking about this here so perhaps there’s a light at the end of the tunnel.
I personally have the knee-jerk reaction. I don’t understand anything you’re saying about blockchain. I’ve heard of farcaster (if you haven’t you might be interested) and nostr (ditto) but don’t know how they work.
The lack of mega downvotes, I’d guess, comes from the fact that people here appreciate the value of decentralisation and also can imagine from experience that a better system is possible than the relatively clumsy “let’s just send copies and requests everywhere”.
In the end I don’t know. But I can see the decentralised social web being where cryptographic technology finds its mainstream landing (BlueSky, like I said, being an interesting space to watch as it’s the middle ground on that front).
I could try explaining in more accessible terms, if you like. I actually enjoy discussing this stuff but I don’t want to derail the thread or sound like I’m evangelizing.
I think solutions like this are best handled entirely on the back end, the general user wouldn’t even need to know a blockchain was involved. The blockchain would just be a data provider that the instance software is using behind the scenes to track stuff. Just like how a general user has no need to understand how the HTTPS protocol actually operates, they just point their web browser at an address and the technical details are handled behind the scenes.
If you wanna explain stuff … go ahead! I’ll read it! You may find yourself writing something that belongs in its own post (perhaps just in a technology community) which you can then link to here.
Are new instances automatically federated? If not, then it seems like making an instance, then hosting content enough to be federated, would be an awful waste of time and money, as I’d expect an instance like that would be quickly defederated.
Somewhat. All the communities have to be looked up manually by users and followed for their content to continue federating into that instance.
But for this purpose the answer is yes. At least as far as I know, you can immediately start posting to other instances. Otherwise private instances would be of no use.
What about new users and new instances requiring manual approval for posts?
I don’t like it either. Age/karma requirements work under an inherently flawed idea, that you’re guilty (i.e. a shitposter) unless proved contrariwise (by using an old or karma-ful enough account), and damn easy to avoid if you’re determined to shit on a community.
IMO better ideas revolve around:
Decreasing the surface of attack. In this case: only text posts allowed, there’s barely any legitimate reason to allow image posts here anyway.
Proper tools so mods can upstream rule violations to the admins. I’m almost certain that admins can see the IP of the posters, they should use that info to ban the posters alongside it. Perhaps in some situations the mods could even be granted temporary rights to see the IP of the posters? (Just an idea.)
Proper tools so mods have an easier time spotting potentially problematic content.
Sadly they all depend on the software, and Lemmy isn’t exactly known for having good mod tools.
Just the IP bans don’t sound good. CG-NAT, VPNs, public networks, school networks, etc… makes a lot of people share the same IP.
I’m aware that IP bans inconvenience users who did nothing wrong. But I feel like this can be alleviated:
But… well, we’re back into “lemmy needs better built-in mod tools” territory.
Then hosts need to ban VPNs.
They need to use cookies that attach a unique identifier to each machine to enforce bans per machine. Hash the cookie so it can’t be edited. If a user clears their cookies, they need to put in a special private key to get back into their account.
Or just make users scan in ID or pay with a credit card to gain membership.
None of those ideas are perfect but they are needed for better ban enforcement overall anyway.
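One caveat on the cookie idea: a plain hash can be recomputed by whoever holds the cookie, so by itself it doesn’t stop editing. What does is a keyed signature (e.g. HMAC) that only the server can produce. A rough sketch, with key management glossed over:

```python
import hashlib
import hmac
import secrets

# A hash alone can be recomputed by the client, so tamper-resistance
# needs a keyed signature the server alone can produce. SERVER_KEY is
# illustrative only; a real deployment would persist and rotate it.
SERVER_KEY = secrets.token_bytes(32)

def sign_machine_id(machine_id, key=SERVER_KEY):
    """Produce a cookie value of the form '<id>.<hmac-sha256 tag>'."""
    tag = hmac.new(key, machine_id.encode(), hashlib.sha256).hexdigest()
    return f"{machine_id}.{tag}"

def verify_cookie(cookie, key=SERVER_KEY):
    """True only if the tag matches the id under the server's key."""
    machine_id, _, tag = cookie.rpartition(".")
    expected = hmac.new(key, machine_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

Even so, this only raises the cost of evasion; clearing cookies or switching browsers still defeats any cookie-based scheme.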
Maybe. Some discussion going on at the moment about how to handle it.
Understood. Is that an option for moderators though?
Like I said, I don’t know if Lemmy gives you that option or if you’d need to set up some kind of bot or an instance-level option.
That would need to be a bot. The problem is that the spammer would just move on to the next community (which they have just done by moving to [email protected]). I just put up a tool that automatically notifies a bunch of admins, mods and community team members when a post gets reported more than 3 times, so please report the posts if you see them.
That’s smart. Glad to hear something like that exists
Preventing any posting in general might be a bit too restrictive IMO. However, I think new users, or users on VPNs, probably should not be allowed to post images so freely.
I believe lemm.ee has a minimum account age limit before users can upload directly to the instance, and dbzer0 scans all user uploaded images for anything that could be questionable.
Perhaps there should be additional restrictions on stuff linking to images outside of lemmy? I blocked the domain within moments of it appearing on my feed, absolutely disgusting
You’d have to generate a blacklist and maintain it, but also avoid bad faith mods and admins
i thought dbzer0 already had a tool for this