It’s not like I don’t have a basic calculator to test the output against, is it?
I might’ve also understated my Python a little bit, as in I understand what the code does. Obviously you could break it, that wasn’t the point. I was more thinking that throwing math problems at what is essentially a language interpreter isn’t the right way to go about things. I don’t know shit though. I guess we’ll see.
I have no idea what you’re trying to say here.
If you want to learn how to code, writing a calculator with a UI isn’t a bad idea (see the sketch below). But then you should code it yourself, because otherwise you won’t learn much.
If you want to try and see if LLMs can write code that executes, then fine, you succeeded. I absolutely fail to see what you gain from that experiment, though.
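For anyone curious what that kind of learning exercise actually involves, here’s a minimal sketch of a calculator with a UI in Python using Tkinter. It’s only an illustration of the idea being discussed, not anyone’s actual project; the choice of Tkinter and the eval-based expression handling are assumptions made for the example.

```python
# Minimal Tkinter calculator sketch -- illustrative only, not production code.
import tkinter as tk

BUTTONS = [
    "7", "8", "9", "/",
    "4", "5", "6", "*",
    "1", "2", "3", "-",
    "0", ".", "=", "+",
]

def main():
    root = tk.Tk()
    root.title("Calculator")

    display = tk.Entry(root, justify="right", font=("TkDefaultFont", 16))
    display.grid(row=0, column=0, columnspan=4, sticky="nsew")

    def press(label):
        if label == "=":
            expr = display.get()
            try:
                # eval of a user-typed arithmetic string: fine for a toy,
                # but a real app should parse the expression properly.
                result = eval(expr, {"__builtins__": None}, {})
            except Exception:
                result = "error"
            display.delete(0, tk.END)
            display.insert(0, str(result))
        else:
            display.insert(tk.END, label)

    for i, label in enumerate(BUTTONS):
        row, col = divmod(i, 4)
        tk.Button(root, text=label, width=4,
                  command=lambda l=label: press(l)).grid(row=row + 1, column=col, sticky="nsew")

    root.mainloop()

if __name__ == "__main__":
    main()
```

Even at this size you run into the decisions (how to evaluate the expression safely, what to do with bad input) that you only really internalize by writing it yourself, which is the point being made above.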
I’ve done a few courses and learned the basics, but it wasn’t until I started using some AI assistance that I got a deeper understanding of Python in general.
I came in very late, obviously, but I’ve still tried to learn coding on and off by myself since the late ’90s, although I ended up on another career path altogether. I’m in my 40s and I’ve finally at least written some decent, working code.
Made myself a scalable clock since my eyes are failing, for example. It was a success and I use it daily. Would never have figured that out without some AI help. Still had to do some registry tweaking and shit since I’m stuck on Windows on my workstation, but it works wonderfully. Just a little widget, but it improved my life greatly.
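For a rough idea of what a scalable clock like that might look like, here’s a sketch in Python with Tkinter. It is not the actual widget described above: the font, the starting size, and the +/- key bindings are all assumptions for illustration, and none of the Windows registry tweaking mentioned is covered here.

```python
# Sketch of a simple resizable desktop clock.
# Assumptions: Python + Tkinter, font size changed with the + and - keys,
# Consolas as a typical Windows font. Not the author's actual code.
import time
import tkinter as tk

def main():
    root = tk.Tk()
    root.title("Clock")
    size = tk.IntVar(value=48)  # starting font size, an arbitrary choice

    label = tk.Label(root, font=("Consolas", size.get()))
    label.pack(padx=20, pady=20)

    def tick():
        label.config(text=time.strftime("%H:%M:%S"))
        root.after(1000, tick)  # refresh once per second

    def resize(delta):
        size.set(max(8, size.get() + delta))
        label.config(font=("Consolas", size.get()))

    root.bind("<plus>", lambda e: resize(4))
    root.bind("<minus>", lambda e: resize(-4))
    tick()
    root.mainloop()

if __name__ == "__main__":
    main()
```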
I’ve also cobbled together a workable alternative to Notepad that I use as a diary of sorts. Never would’ve figured that out alone either.
As I see it, at least whatever AI assistant you use doesn’t give you the gatekeeping or abuse you get when you ask a relatively simple question somewhere else. Kinda like this, I guess.
TL;DR: In some situations our current ‘AIs’ can be helpful.
Expand that into 10k line custom programs and you’ll begin having nightmarish issues.
That might be the underlying problem. Software project management around small projects is easy. Anything that has a basic text editor and a Python interpreter will do. We have all these fancy tools because shit gets complicated. Hell, I don’t even like writing 100 lines without git.
A bunch of non-programmers make a few basic apps with ChatGPT and think we’re all cooked.
No doubt, I was merely suggesting that throwing math problems might not have been the intended use for what is essentially a language interpreter, obviously depending on the model in question.