• 0 Posts
  • 33 Comments
Joined 2 years ago
Cake day: June 16th, 2023




  • I’d add that it also depends on your field. If you spend a lot of time assembling technically bespoke solutions that are still broadly consistent with a lot of popular projects, then it can cut through a lot in short order. When I come to a segment like that, the LLM tends to get a lot further.

    But if you are doing something because you can’t find anything even vaguely like what you want to do, it tends to manage only three or so lines of useful material, and only a minority of the time. And the bad suggestions can be annoying. They’re less outright dangerous once you get used to being skeptical by default, but still annoying when it insists on re-emphasizing a bad suggestion.

    So I can see where it can be super useful, and also how it can seem more trouble than it is worth.

    Claude and GPT have been my recent experience. The biggest improvement I’ve seen is that the suggestions have gotten shorter. It used to be three maybe-useful lines bundled with a further dozen lines of not what I wanted. Now the first three lines might be similar, but it’s less likely to suggest a big chunk of code.

    I was helping someone the other day and the comic felt pretty accurate. It did exactly the opposite of what the user prompted for. Even after coaxing it into the general ballpark, about half of the generated code was unrelated to the requested task, with side effects that would have seemed functional unless you paid attention and noticed that throughput would have been about 70% lower than you should expect. That was a significant risk, since the user was in over their head and unable to understand the suggestions they needed to review, as they were working in a pretty jargon-heavy ecosystem (not the AI’s fault; they had to invoke standard libraries with incomprehensible, jargon-heavy syntax).


  • It’s sometimes useful, often obnoxious, sometimes both.

    It tends to shine on blatantly obvious boilerplate stuff that is super easy but tedious. You can be sloppy with your input and it will fix it up into something reasonable. Even then you’ve got to be careful, as sometimes what seems blatantly obvious still gets screwed up in weird ways. Even with mistakes, though, it’s sometimes easier to edit the result than to start from scratch.

    Using an AI-enabled editor that watches your activity and suggests little snippets is useful, but it can be really annoying when it gets particularly insistent on a bad suggestion and keeps nagging you with “hey, look at this, you want to do this, right?”

    Overall it’s merely mildly useful to me, as my career has been significantly about minimizing boilerplate with decent success. However for a lot of developers, there’s a ton of stupid boilerplate, owing to language design, obnoxiously verbose things, and inscrutable library documentation. I think that’s why some developers are scratching their heads wondering what the supposed big deal is and why some think it’s an amazing technology that has largely eliminated the need for them to manually code.








  • Well, the article is covering the disclaimer, which is vague enough to mean pretty much whatever.

    I can buy that he’s taking it to the level of: if it can’t directly be used for the stuff in the disclaimer, well, what could it be used for then? Crafting formulas seems to be a possibility, especially since the spreadsheet formula language is kind of esoteric and clumsy to read and write. It ‘should’ be right up an LLM’s alley: a relatively limited grammar that’s kind of a pain for a human to work with, but easy enough for an LLM to get right in theory. LLMs are sometimes useful for scripting/programming, but the vocabulary and complexity can easily get away from them; Excel formulas are less likely to have programming-level complexity or arbitrarily many methods to invoke. You of course have to eyeball the formula to see if it looks right, and if it does screw up the cell parameters, that might be a hard thing to catch by eyeballing for most people.
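
    To make up an illustration of that last point (the prompt, sheet layout, and column numbers here are invented): ask for “the price for the part in A2” and you might get back something like =VLOOKUP(A2, Prices!A:D, 3, FALSE) when the price actually lives in column 4. That evaluates without error and returns a plausible-looking number, so unless you already know the sheet layout, nothing about it looks wrong at a glance.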



  • Virtualization as a ‘platform’ was a bit overhyped, hence my ‘.ova’ comment. There was a push for a lot of applications to ship exclusively as whole virtual machines, to create OS variants dedicated to running a single application. For a lot of applications that was supremely awkward, because app developers ended up having to ‘own’ things they didn’t want to own, like the customer’s network configuration.

    Virtualization as a utility has of course persisted, but it’s much rarer for a vendor to declare their ‘runtime’ to be VMware than it once was. Virtualization had existed at IBM for a long time, VMware made it broadly more available and flexible in the PC space, and then around the mid-2000s things started to go a bit crazy with ‘virtualization is the runtime’.

    Now mind you, compared to dot-com or ‘big data’ it was trivial, but it was all a bit silly for a time there.




  • Yeah, but let’s say you had 12 guys hand-scrubbing to keep up with the plates, and then you got a mediocre dishwashing machine that did a worse job of scrubbing. You wouldn’t dismiss the machine because it was imperfect; you’d say “I need a dishwashing machine operator,” who might have to do a quality check on the way out, or otherwise have whoever is plating set the failures aside in a stack for hand scrubbing, and lay off 11 of the guys.

    So this could be the way out if AI worked ‘as advertised’. However, it largely does not.

    But then to the second point, it doesn’t even need to work as advertised if the business leader thinks it’s good enough and does the layoffs. They might end up having to scale back operations, but somehow it won’t be their fault.