Programmer, graduate student, and gamer. I’m also learning French and love any opportunity to practice :)

  • You have to be explicit about which module you’re using at all times, even though 99% of the time only one could apply. When type-class resolution is unique but complicated, there’s no mental overhead for the Haskell programmer, but assembling all the right modules is a lot of overhead for the OCaml programmer. Type classes also let us write functions that are polymorphic under a class constraint (see the sketch below); in OCaml you have to take an explicit module argument to do this, and if you want to start composing such functions, it gets tedious extremely fast.

    And then even once you’re using a module, you can’t overload a function name. See: `+` vs. `+.`. Basically, modules and type classes solve different problems. You can do some things with modules that you cannot ergonomically do with type classes, for example: create a bit-set representation of sets of integers and a balanced search tree for sets of other types, then expose both uniformly through the same module functor. But Haskell has other ways to achieve that same functionality and more.

    OCaml’s type system cannot replicate the things you can do with Haskell’s higher-kinded types, type families, or data kinds at all (and it captures only a fraction of Haskell’s GADTs).
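
    A minimal Haskell sketch of the class-constraint point (the function names are mine, purely illustrative); the OCaml equivalent would take an explicit module argument, and composing such functions means threading those modules by hand:

    ```haskell
    -- Polymorphic under a class constraint: this works for any type with a
    -- Num instance, and GHC resolves the instance at each call site.
    sumSquares :: Num a => [a] -> a
    sumSquares xs = sum [x * x | x <- xs]

    -- Composing constrained functions "just works"; the constraints merge
    -- with no module plumbing from the caller.
    normSquared :: Num a => [a] -> [a] -> a
    normSquared xs ys = sumSquares xs + sumSquares ys

    main :: IO ()
    main = do
      print (sumSquares [1, 2, 3 :: Int])     -- resolved to the Int instance
      print (sumSquares [1.5, 2.5 :: Double]) -- resolved to the Double instance
      print (normSquared [1, 2] [3, 4 :: Int])
    ```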


  • Largely reasonable?

    Haskell is not good for systems programming, which sums up about 60–70% of that post. Laziness is lovely in theory, but many industry users of Haskell turn on strictness (e.g. GHC’s Strict/StrictData extensions) for all or most of their code, so I certainly agree with that part too.

    Their largest complaint about using Haskell for small non-systems programs seems to be the mental overhead induced by laziness. But for me, for small programs where performance isn’t a huge concern (think Advent of Code or a script for daily life), laziness reduces my mental overhead (see the sketch at the end of this comment). I think the author is just especially concerned with having a deep understanding of their programs’ performance because of their systems background. I worry about performance when it becomes relevant. Debugging Haskell performance issues is certainly harder than in strict languages, but still totally doable.

    The lack of type classes or any other form of ergonomic overloading in OCaml is easily the single “feature” most responsible for the language never taking off.
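
    To illustrate the laziness point above, a minimal sketch (examples mine): describing an infinite structure and consuming only a prefix replaces explicit loop bookkeeping.

    ```haskell
    main :: IO ()
    main = do
      -- An infinite list of squares; take forces only the first ten.
      print (take 10 [n * n | n <- [1 :: Integer ..]])
      -- First power of 2 above one million; iterate builds an infinite
      -- unfold, and dropWhile/head demand only a finite prefix of it.
      print (head (dropWhile (<= 1000000) (iterate (* 2) (1 :: Integer))))
    ```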






  • (on “Code Smells Catalog” in Programming@programming.dev, 2 months ago)

    “Monadic type” has something like three meanings depending on context, and it’s not clear which one you mean. One of them is common in math but not so common in programming, so probably not that one. But neither “parametric types with a single argument” nor “types that encode a category-theoretic monad” has the property you describe, as far as I know.

    I imagine you’re probably referring to the latter, since the Optional monad exists. That’s very different from returning null. The inhabitants of Integer in Java, for example, are the boxed machine ints and null. The inhabitants of Optional[Integer] (it won’t let me use angle brackets here) are Optional.of(i) for each machine int i, Optional.empty(), and null.

    Optional.empty() is not null and should not be called a “null object.” It’s also not of type Integer, so you’re not even allowed to return it unless the function’s type explicitly says so. Writing such function types is pretty uncommon in Java programs, but it’s more normal in Kotlin. In languages like Haskell, which don’t have null at all, this is idiomatic (see the sketch below).
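
    A minimal Haskell sketch of that idiom (the map and the names in it are made up for the demo):

    ```haskell
    import qualified Data.Map as Map  -- from the containers package

    -- "No result" is part of the type: lookup returns Maybe Int, and the
    -- caller must handle both constructors. Nothing is a real value of
    -- Maybe Int; there is no null inhabiting Int or Maybe Int.
    ages :: Map.Map String Int
    ages = Map.fromList [("alice", 30), ("bob", 25)]

    describe :: String -> String
    describe name = case Map.lookup name ages of
      Just age -> name ++ " is " ++ show age
      Nothing  -> name ++ " is unknown"

    main :: IO ()
    main = mapM_ (putStrLn . describe) ["alice", "carol"]
    ```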


  • I’ve only ever seen “one-time” in cryptography used to refer to one-time pads (OTPs). They are literally uncrackable (a given ciphertext could decrypt to any plaintext of the same length, depending on the key), but they achieve that by using a shared private key. The cipher becomes attackable if the key is re-used, hence the “one-time.” (There’s a sketch of the construction below.)

    But that key has to be exchanged somehow, and the exchange can be attacked instead. Key-exchange algorithms can’t necessarily transfer every possible pad, which means eavesdropping on the exchange would make an OTP attackable. So the best option we know of that doesn’t require secret meetings to share pads* really is to use RSA encryption. Once we have efficient quantum-resistant schemes, those will be the best option we know of.

    * and let’s be honest, secret meetings can be eavesdropped on as well.
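
    For the curious, a minimal sketch of the XOR construction behind one-time pads (the key bytes here are made up, and reusing a short cycled key like this is exactly what a real pad must never do):

    ```haskell
    import Data.Bits (xor)
    import Data.Word (Word8)

    -- One-time pad over bytes: ciphertext = plaintext XOR key, and applying
    -- the same operation again decrypts. With a uniformly random key as long
    -- as the message, every plaintext of that length is equally consistent
    -- with a given ciphertext.
    otp :: [Word8] -> [Word8] -> [Word8]
    otp = zipWith xor

    main :: IO ()
    main = do
      let plaintext = map (fromIntegral . fromEnum) "attack at dawn" :: [Word8]
          key = cycle [0x5a, 0xc3, 0x17]  -- demo only; NOT a real pad
          ciphertext = otp key plaintext
      print ciphertext
      print (otp key ciphertext == plaintext)  -- XOR twice recovers the text
    ```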




  • I’m a computer scientist mainly, but with a heavy focus/interest in computer architecture. My plan at this point is to teach at a university - and it seems to me like that would be a good place to create completely open-standards technology from.^1 Specifically, if the point isn’t to make money, there’s no reason to create walled gardens.

    There’s certainly enough interest from people who want to be able to build their own systems. What would actually worry me isn’t the ability to make a new open standard or any of that. It’s that AMD64 is very hard to compete with in this space, because the processors are just faster, and there is so much x86 software that people who build PCs usually want access to.

    AMD64’s performance is the result of years and years of optimization and of patenting new hardware techniques, followed by aggressive litigation against anyone trying to compete. ARM performance is catching up, but ARM prefers licensing its core IP over building its own systems, which makes it harder to break into the PC space even if they want to.

    A new player would be in for a long, long time of unprofitable work just to compete with AMD64 - which most people are still happy with anyway.

    ^1 Some others and I are actually working on a new ISA / open soft processors for it. However, it’s aimed at an educational setting and unlikely to ever be used outside of embedded devices, at most.


  • Only the number of shuffles is linear. Shuffling the array and marking/deleting the correctly-placed elements still takes linear time per pass, even with a “placement oracle.” So it’s at best O(n^2), and the algorithm still wouldn’t be a good sorting algorithm.

    It’s like doing selection sort, except that we’re selecting a random set of elements (drawn from that Poisson distribution) instead of the smallest one. (Sketch below.)
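
    A minimal Haskell sketch of the algorithm in question (my reconstruction, just to make the cost per pass concrete):

    ```haskell
    import Data.List (sort)
    import System.Random (randomRIO)  -- from the random package

    -- Uniform shuffle by repeatedly extracting a random element
    -- (quadratic, but clear; this is a demo, not a benchmark).
    shuffle :: [a] -> IO [a]
    shuffle [] = pure []
    shuffle xs = do
      i <- randomRIO (0, length xs - 1)
      let (before, x : after) = splitAt i xs
      (x :) <$> shuffle (before ++ after)

    -- Shuffle, delete the correctly-placed elements, recurse on the rest,
    -- and count the shuffles. Even with an O(1) "placement oracle" in
    -- place of the sort, the zip/filter is still linear work per pass.
    sortByShuffling :: [Int] -> IO Int
    sortByShuffling [] = pure 0
    sortByShuffling xs = do
      ys <- shuffle xs
      let leftovers = [y | (y, s) <- zip ys (sort ys), y /= s]
      (1 +) <$> sortByShuffling leftovers

    main :: IO ()
    main = do
      k <- sortByShuffling [1 .. 10]
      putStrLn ("sorted 10 distinct elements in " ++ show k ++ " shuffles")
    ```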


  • I’m not sure the median is what you want. The worst-case behavior is unbounded: there is no guarantee that such an algorithm ever terminates, and in fact (albeit with probability zero) it may not! But that means there is no well-defined median; we can’t enumerate the space.

    So let’s instead ask about the average, which is meaningful: ever-higher iteration counts are also decreasingly likely, in a way that we can compute without trying to enumerate all possible sequences of shuffles.

    Consider the problem like this: at every iteration, remove the elements that are in the correct positions and continue sorting the shorter list. As long as we keep getting shuffles where nothing is in the correct position, we can go on forever. Such shuffles are called derangements, and the probability of getting one is about 1/e. More precisely, the number of derangements of n items is the nearest integer to n!/e, so the probability of a derangement is 1/n! * [n!/e]. This converges to 1/e incredibly quickly as n grows - unsurprisingly, since the error is on the order of 1/n!.

    We’re now interested in partial derangements D_{n,k}: the number of permutations of n elements that have exactly k fixed points. D_{n,0} is the number of derangements, which as established is [n!/e]. Suppose k isn’t 0. Then we can pick which k points are correctly sorted, and multiply by the number of derangements of the rest, for a total of nCk * [(n-k)!/e]. Note that [1!/e] is 0; indeed, it’s not possible for exactly one element to be out of place.

    So what’s the probability of a particular partial derangement? Now we’re asking for D_{n,k}/n!, which is nCk/n! * [(n-k)!/e]. Drop the nearest-integer brackets and call it an approximation: (nCk * (n-k)!)/(n! * e) = 1/(k! * e). Look familiar? That’s a Poisson distribution with λ = 1!

    But if we have a Poisson distribution with λ = 1, that means that on average we expect one new sorted element per shuffle, and hence we expect to take about n shuffles. I’ll admit, I was not expecting that when I started working this out. I wrote a quick program to average some trials as a sanity check, and it seems to hold.
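
    For concreteness, a sketch of that kind of sanity check (a reconstruction, not the exact program):

    ```haskell
    import Control.Monad (replicateM)
    import Data.List (sort)
    import System.Random (randomRIO)  -- from the random package

    -- Uniform shuffle by repeatedly extracting a random element.
    shuffle :: [a] -> IO [a]
    shuffle [] = pure []
    shuffle xs = do
      i <- randomRIO (0, length xs - 1)
      let (before, x : after) = splitAt i xs
      (x :) <$> shuffle (before ++ after)

    -- Shuffle, lock in the fixed points (elements matching the sorted
    -- order), recurse on the leftovers; count the shuffles used.
    shufflesNeeded :: [Int] -> IO Int
    shufflesNeeded [] = pure 0
    shufflesNeeded xs = do
      ys <- shuffle xs
      let leftovers = [y | (y, s) <- zip ys (sort ys), y /= s]
      (1 +) <$> shufflesNeeded leftovers

    main :: IO ()
    main = do
      let n = 20 :: Int
          trials = 2000 :: Int
      counts <- replicateM trials (shufflesNeeded [1 .. n])
      let avg = fromIntegral (sum counts) / fromIntegral trials :: Double
      -- If the Poisson(1) argument holds, avg should land near n.
      putStrLn ("average shuffles for n = " ++ show n ++ ": " ++ show avg)
    ```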