I’d say if you have 80% of the requirements you might as well apply. I would frankly ignore years of experience more or less entirely.
It’s not exactly uncommon for a listing to advertise the person they want, but to accept applicants with significantly less on the basis that they can get there. For nearly every job I’ve ever got, I wasn’t at the advertised level in something or other.
The median is an average. But it isn’t the mean, which is presumably what the other comment was using.
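To make the distinction concrete, here’s a quick sketch with made-up salary numbers: both are “averages”, but a single outlier drags the mean while the median stays put.

```python
# Median vs mean on made-up data: one outlier skews the mean heavily,
# while the median (the middle value) is unaffected.
data = sorted([20_000, 30_000, 40_000, 50_000, 1_000_000])

mean = sum(data) / len(data)     # arithmetic mean
median = data[len(data) // 2]    # middle element of the sorted list (odd length)

print(mean)    # 228000.0 -- pulled up by the outlier
print(median)  # 40000 -- the typical value
```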
ChatGPT is not designed to fool us into thinking it’s a human. It produces language with a specific tone & direct references to the fact it is a language model. I am confident that an LLM trained specifically to speak naturally could do it. It still wouldn’t be intelligent, in my view.
The Turing test is flawed, because while it is supposed to test for intelligence it really just tests for a convincing fake. Depending on how you set it up, I wouldn’t be surprised if a modern LLM could pass it, at least some of the time. That doesn’t mean they’re intelligent (they aren’t), but I don’t think the Turing test is good justification.
For me the only justification you need is that they predict one word (or even letter!) at a time. ChatGPT doesn’t plan a whole sentence out in advance, it works token by token… The input to each prediction is just everything so far, up to the last word. When it starts writing “As…” it has no concept of the fact that it’s going to write “…an AI language model” until it gets through those words.
Frankly, given that fact it’s amazing that LLMs can be as powerful as they are. They don’t check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token… An incredible piece of technology, despite its obvious flaws.
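The token-by-token loop can be sketched with a toy example. This is a hand-written bigram lookup rather than a real model (the table and words here are made up for illustration), but the control flow is the point: each step sees only the text produced so far and picks one next word, with no plan for the rest of the sentence.

```python
# Toy sketch of next-token generation. A real LLM replaces this lookup
# table with billions of learned parameters, but the loop is the same:
# predict one token from the context so far, append it, repeat.
bigrams = {
    "As": "an",
    "an": "AI",
    "AI": "language",
    "language": "model",
}

def generate(start: str, max_tokens: int = 10) -> list[str]:
    out = [start]
    for _ in range(max_tokens):
        nxt = bigrams.get(out[-1])  # prediction depends only on context so far
        if nxt is None:             # no known continuation: stop generating
            break
        out.append(nxt)
    return out

print(generate("As"))  # -> ['As', 'an', 'AI', 'language', 'model']
```

Note that when the loop writes “As” it has no representation anywhere of the words still to come; they only exist once the loop reaches them.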
Writing boring shit is LLM dream stuff. Especially tedious corpo shit. I have to write letters and such a lot, and it makes it so much easier having a machine that can summarise material and write it up in dry corporate language in 10 seconds. I already have to proofread my own writing, and there are almost always 1 or 2 other approvers, so checking it for errors is no extra effort.
It makes a lot more sense for me to make myself rich now than to make a version of myself rich who will never become me.
Newer lasers can work on lighter hair; still less effective, but possible nowadays.
LLMs are just predictive text but bigger
Oh, true, I didn’t look too close.
The “puzzle” isn’t the test, the test uses your browser history, mouse activity, etc to identify you as human (or not). The puzzle is used to generate training data for ML models.
It’s amazing what your phone can run, even better if you have a Bluetooth controller.
What??
S24 ultra, I have one for a similar reason, although I also really like a lot of its other features. While you can certainly get good photos with other phones, it is among the best on the market.
I was considering a £500-600 DSLR like I’ve had in the past, but ultimately I like to take photos when an opportunity arises, not just at the times I happen to have my expensive camera on me. If you take a lot of photos but you aren’t a professional, the best thing is a high end phone. Doubly so, because unless you’re very experienced at setting up the camera correctly for the conditions, your phone camera is almost certainly going to do a better job than you would on a manual camera.
So, in the end, rather than getting a cheaper phone and a camera, I combined the two. I know Samsung suck in a lot of ways, but when it comes to actually using my phone, they’re excellent compared to other brands I’ve tried.
If you took all the racists and bigots in the world and put them in one country… It still wouldn’t be justified to wipe them out. I wouldn’t want to go to that country - it would certainly be among the worst places in the world - but I also wouldn’t suggest we invade and start murdering them.
I am not surprised that it’s just ChatGPT in a box lol, not at all.
And let’s be honest, not because they were fascists.
AI models don’t resynthesize their training data. They use their training data to determine parameters which enable them to predict a response to an input.
Consider a simple model (too simple to be called AI, but the underlying concepts are really very similar) - a linear regression. In linear regression we produce a model which follows a straight line through the “middle” of our training data. We can then use this to predict values outside the range of the original data - albeit with less certainty about the likely error.
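The regression analogy in a few lines of code (the data points here are made up): the training data is boiled down to just two parameters, and the fitted line can then be evaluated at x-values that never appeared in training.

```python
# Ordinary least-squares fit of a line, no libraries. The training data
# determines two parameters (slope, intercept); the model then predicts
# at unseen x-values -- extrapolation, with growing uncertainty the
# further outside the training range you go.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predict(x: float) -> float:
    return slope * x + intercept

# x = 10 is well outside the training range [1, 5], but the model
# still produces an answer -- it just comes with less certainty.
print(predict(10.0))
```

The point of the analogy: the model isn’t storing and shuffling the five training points, it’s storing two numbers distilled from them, and every prediction (inside or outside the original range) comes from those.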
In the same way, an LLM can give answers to questions that were never asked in its training data - it’s not taking that data and shuffling it around, it’s synthesising an answer by predicting tokens. Also similarly, it does this less well the further outside the training data you go. Feed them the right gibberish and it doesn’t know how to respond. ChatGPT is very good at dealing with nonsense, but if you’ve ever worked with simpler LLMs you’ll know that typos can throw them off notably… They still respond OK, but things get weirder as they go.
Now it’s certainly true that (at least some) models were trained on CSAM, but it’s also definitely possible that a model that wasn’t could still produce sexual content featuring children. Its training set need only contain enough disparate elements for it to correctly predict what the prompt is asking for. For example, if the training set contained images of children it will “know” what children look like, and if it contains pornography it will “know” what pornography looks like - conceivably it could mix these two together to produce generated CSAM. It will probably look odd, if I had to guess? Like LLMs struggling with typos, and regression models being unreliable outside their training range, image generation of something totally outside the training set is going to be a bit weird, but it will still work.
None of this is to defend generating AI CSAM, to be clear, just to say that it is possible to generate things that a model hasn’t “seen”.
Yeah I don’t know what any of that means so I’m stuck with good ol’ daddy Samsung for now 😂
It’s called hackthebox not hackoutofthebox