
  • the TSA is not “not perfect”; they’re a joke.

it’s pure theater. they have basically no ability to detect actual weapons, which is why it’s such a common problem for passengers to arrive abroad only to discover they accidentally carried loose ammunition across the border.

    there’s a huge difference between “not quite perfect” and “completely and utterly useless waste of time, money, and resources”, the latter of which describes the TSA.

IF they actually did anything useful at all, then fine, you’d have a point. but they don’t, which is why people are disagreeing with you.

because in principle you’re right that security is required and should be taken seriously…but the TSA isn’t actually providing security. they’re providing the appearance of security.


  • this is not true.

    it entirely depends on the specific application.

there is no OS-level, standardized, dynamic allocation of RAM (definitely not on windows, and i assume it’s the same for OSX).

this is because most programming languages handle RAM allocation within the individual program (through their runtime or allocator), so the OS can’t just reshuffle a program’s RAM however it wants.

the OS could put processes to “sleep”, but that’s basically just the swap memory mentioned earlier; it leads to HD degradation and poor performance/hiccups, which is why it’s not used much…

    so, no.

    RAM is usually NOT dynamically allocated by the OS.

    it CAN be dynamically allocated by individual programs, IF they are written in a way that supports dynamic allocation of RAM, which some languages do well, others not so much…

    it’s certainly not universally true.
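
to make that concrete, here’s a minimal sketch (python, using the third-party psutil package; the numbers are illustrative and will differ per system) of allocation happening inside the program, with the OS only granting or denying the request:

```python
# a process growing its own memory footprint; the OS merely grants
# (or denies) the request. requires the third-party `psutil` package.
import psutil

proc = psutil.Process()  # handle to the current process

def rss_mb() -> float:
    # resident set size (RAM actually in use), in megabytes
    return proc.memory_info().rss / 1024 / 1024

print(f"before allocation: {rss_mb():.1f} MB")

# the *program* decides to allocate here, not the OS:
buffer = [0] * 50_000_000  # ~50M pointers, roughly 400 MB on a 64-bit build

print(f"after allocation:  {rss_mb():.1f} MB")

del buffer  # and the program's runtime decides when to hand it back
```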

    also, what you describe when saying:

    Any modern OS will allocate RAM as necessary. If another application needs, it will allocate some to it.

    …is literally swap. that’s exactly what the previous user said.

and swap is not the same as “allocating RAM when a program needs it”; instead, it’s the OS going “oh shit! I’m out of RAM and need more NOW, or I’m going to crash! better be safe and steal some memory from disk!”

    what happens is:

the OS runs out of RAM and needs more, so it marks a portion of whatever disk is available as swap space and starts using that instead.

    HDs are not built for this use case, so whichever processes use the swap space become slooooooow and responsiveness suffers greatly.
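
you can watch this happen on your own machine; psutil exposes the OS’s own swap counters (a small sketch, assuming psutil is installed; the sin/sout counters aren’t reported on every platform):

```python
# inspect the OS's swap counters via the third-party `psutil` package.
import psutil

swap = psutil.swap_memory()
print(f"swap total: {swap.total / 1024**3:.1f} GiB")
print(f"swap used:  {swap.used / 1024**3:.1f} GiB ({swap.percent}%)")
# cumulative bytes swapped in from disk / out to disk:
print(f"swapped in: {swap.sin / 1024**2:.1f} MiB, out: {swap.sout / 1024**2:.1f} MiB")
```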

on top of that, memory of any kind is built for a certain number of read/write operations. this is what’s considered the “lifespan” of a memory component.

    RAM is built for a LOT of (very fast) R/W operations.

    hard drives are NOT built for that.

RAM has at least an order of magnitude more R/W ops going on than a hard drive, so when a computer uses swap excessively, instead of as a very last resort as intended, the lifespan of the disk is vastly shortened.
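
some napkin math shows the scale of the problem. the endurance rating and write volumes below are made-up but plausible numbers, purely for illustration:

```python
# back-of-envelope SSD lifespan math with assumed, illustrative numbers.
tbw_rating = 300              # assumed endurance rating: 300 TB written
heavy_swap_tb_per_day = 1.5   # assumed near-constant swapping (extreme case)
normal_tb_per_day = 0.02      # vs. ~20 GB/day of ordinary desktop use

print(f"under heavy swap: {tbw_rating / heavy_swap_tb_per_day:.0f} days")    # ~200 days
print(f"under normal use: {tbw_rating / normal_tb_per_day / 365:.0f} years") # ~41 years
```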

    for an example of a VERY stupid, VERY poor implementation of this behavior, look up the apple M1’s rapid SSD degradation.

    short summary:

apple only put 8GB of RAM into the first-gen M1 machines, which made the OS use swap memory almost continuously, which wore out the SSD MUCH faster than expected.

…and since the SSD is soldered onto the mainboard, that completely bricks the device in about half a year to a year, depending on usage.

    TL;DR: you’re categorically and objectively wrong about this. sorry :/

    hope you found this explanation helpful tho!


and your source measured the effects in one single area that cathartic theory is supposed to apply to, not all of them.

your source in no way supports the claim that the observed effects apply to anything other than aggressive behavior.

    i understand that the theory supposedly applies to other areas as well, but as you so helpfully pointed out: the theory doesn’t seem to hold up.

so either A: the theory is wrong, and the association between aggression and sexuality also needs to be called into question;

or B: the theory isn’t wrong after all.

you are now claiming that the theory is wrong, but at the same time that the theory is totally correct! (whenever it’s convenient for you, that is)

so which is it now? is the theory correct? then your source must be wrong, or at least irrelevant.

    or is the theory wrong? then the claim of a link between sexuality and aggression is also without support, until you provide a source for that claim.

    you can’t have it both ways, but you’re sure trying to.




not necessarily, but it can be a good idea to have a distributed, tamper-proof ledger of transactions.

that way anyone can provide proof for basically anything to do with the service: payments, rides, locations, etc.

    it might also have advantages from a security perspective for riders and drivers.

there are advantages; it’s not strictly necessary, but it may well be the best option for a distributed network (i.e. no central server infrastructure, at least not beyond some simple software repository for downloads/updates)
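
as a sketch of what “tamper-proof” buys you (hypothetical records; a real system would add signatures and some consensus mechanism on top): each entry’s hash covers the previous entry’s hash, so quietly editing any past record breaks every hash after it.

```python
# a minimal hash-chained ledger: editing any past record invalidates
# everything after it. the records here are hypothetical.
import hashlib
import json

def entry_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

ledger = []
prev = "0" * 64  # genesis value
for record in [
    {"ride": 1, "driver": "A", "rider": "B", "fare": 12.5},
    {"ride": 2, "driver": "A", "rider": "C", "fare": 8.0},
]:
    prev = entry_hash(record, prev)
    ledger.append({"record": record, "hash": prev})

# anyone holding a copy can re-verify the whole chain:
check = "0" * 64
for item in ledger:
    check = entry_hash(item["record"], check)
    assert check == item["hash"], "ledger was tampered with!"
print("ledger verified")
```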





  • Meaning what?

meaning the model’s training data is what lets you work around or improve on that bias. without the training data, that’s (borderline) impossible. so in order to tweak models and push development further, you need to know exactly what went into the model, or you’ll waste a lot of time guessing.

    I omitted requirements on freely sharing it as implied, but otherwise?

you disregarded half of what makes an AI model: the half that actually results in a working model. without the training data, you’d only have some code that does…something.

    and that something is entirely dependent on the training data!

    so it’s essential, not optional, for any kind of “open source” AI, because without it you’re working with a black box. which is by definition NOT open source.
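
here’s a toy illustration of that black box problem (made-up training data, deliberately tiny): the exact same “model code”, fed two different datasets, yields two models that disagree on the same input, and without seeing the data you can’t explain why.

```python
# same training code, two different (made-up) datasets, two models
# that disagree on the very same input.
from collections import Counter

def train(samples: list[tuple[str, int]]) -> Counter:
    # count how often each word co-occurs with the positive label
    weights = Counter()
    for text, label in samples:
        for word in text.split():
            weights[word] += 1 if label == 1 else -1
    return weights

def predict(weights: Counter, text: str) -> int:
    return 1 if sum(weights[w] for w in text.split()) > 0 else 0

model_a = train([("great movie", 1), ("boring plot", 0), ("great acting", 1)])
model_b = train([("great movie", 1), ("great scam", 0), ("boring plot", 0)])

print(predict(model_a, "great scam"))  # 1: model_a only ever saw "great" as positive
print(predict(model_b, "great scam"))  # 0: model_b saw "great" in a negative context too
```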


all models carry bias (see the recent gemini headlines for an extreme example), and knowing exactly what those biases are can range from important to extremely important, depending on the use case!

    it’s also important if you want to iterate on a model: if you use the same data set and train the model slightly differently, you could end up with entirely different models!

    these are just 2 examples, there’s many more.
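
to make the second example concrete, here’s a toy run (1-D k-means, chosen because it’s non-convex, like most real training): same data, same code, but a different starting point converges to a genuinely different model.

```python
# same data, same algorithm: only the initial centers differ,
# yet the two runs converge to different cluster centers.
def kmeans(data: list[float], c0: float, c1: float) -> tuple[float, float]:
    for _ in range(20):  # a few Lloyd iterations suffice here
        a = [x for x in data if abs(x - c0) <= abs(x - c1)]
        b = [x for x in data if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)  # neither cluster empties with this data
    return c0, c1

data = [0, 1, 2, 10, 11, 12, 20, 21, 22]
print(kmeans(data, c0=0, c1=1))   # (1.0, 16.0)
print(kmeans(data, c0=0, c1=22))  # (4.8, 18.75): same data, different model
```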

    also, you are thinking of LLMs, which is just one kind of model. this legislation applies to all AI models, not just LLMs!

    (and your definition of open source is…unique.)






• 9bananas@lemmy.world to Games@lemmy.world · What game fits this? · 6 months ago

    i don’t think so, but you can either entirely disable it, or make them passive, or tune it to your liking; there’s tons of customizability in the difficulty!

    it’s honestly some pretty smart design in how they handled it! you should give it a try, see if you like it!

one little beginner’s tip that’s kinda important: they always choose the shortest path to your base (meaning pretty much any structure you build), and they attack based on your power consumption! (there’s a little widget that tells you when a wave is coming)




  • i looked it over and … holy mother of strawman.

    that’s so NOT related to what I’ve been saying at all.

    i never said anything about the advances in AI, or how it’s not really AI because it’s just a computer program, or anything of the sort.

    my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.

    my argument isn’t even related to algorithms, programs, or machines.

    what these tools do is not intelligence: it’s mimicry.

    that’s the correct word for what these systems are capable of. mimicry.

intelligence has properties that are simply not exhibited by these systems; THAT’S why it’s not AI.

    call it what it is, not what it could become, might become, will become. because that’s what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.

the wiki talks about people shifting the goal posts in order to “dismiss the advances in AI development”, but that’s not what this is. i haven’t changed what intelligence means; you did! you moved the goal posts!

    I’m not denying progress, I’m denying the claim that the goal has been reached!

    that’s an entirely different argument!

all of the current systems (ML, LLMs, DNNs, etc.) exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.

    calling what we have currently AI is wrong, by definition; it’s like saying a single neuron is a brain, or that a drop of water is an ocean!

    just because two things share some characteristics, some traits, or because one is a subset of the other, doesn’t mean that they are the exact same thing! that’s ridiculous!

    the definition of AI hasn’t changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that’s not ME moving goal posts, it’s you.

you said a definition from 70 years ago is “old” and therefore irrelevant, but that’s a laughably weak argument in general, and an even weaker one in a scientific context.

    is the Pythagorean Theorem suddenly wrong because it’s ~2500 years old?

    ridiculous.