Ask HN: How do you prompt the "advanced" models
18 points by _jogicodes_
With the apparently more advanced reasoning models, I expected my results to change. In Windsurf I have both DeepSeek R1 and o3-mini available, and I had thought they would improve the outcomes of the prompts I'm giving. They did not, far from it. Even though they consistently pull ahead of Claude 3.5 Sonnet in benchmarks, in practice, with the way I am prompting, Claude almost always comes up with the better solution. So much so that I can't remember a time when Claude couldn't figure something out and switching to another model fixed it.
Because of this discrepancy between the benchmarks and my own experience, I am wondering if my prompting is off. It may be that my prompting has become Claude-specific, having used it for a while now. Is there a trick to prompting the reasoning models "properly"?
cruffle_duffle ·12 days ago
One thing I always make sure of is to never let it just spit out code. I go back and forth a few times to ensure alignment before I say “Bombs Away” and let it write code.
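In practice that's a two-phase loop: get a plan, push back until it matches what I want, and only then ask for code. A minimal sketch, assuming the OpenAI Python SDK (the model name and prompts are just placeholders):

    # Phase 1: ask for a plan only, and review it before any code exists.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    history = [
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Plan only, no code yet: add retry logic to our HTTP client."},
    ]
    plan = client.chat.completions.create(model="gpt-4o", messages=history)
    print(plan.choices[0].message.content)  # read it, push back if it's off

    # Phase 2: only after alignment, the "Bombs Away" step.
    history.append({"role": "assistant", "content": plan.choices[0].message.content})
    history.append({"role": "user", "content": "Plan looks right. Now write the code."})
    code = client.chat.completions.create(model="gpt-4o", messages=history)
    print(code.choices[0].message.content)

The point of keeping the agreed plan in the history is that the model writes code against its own plan instead of re-deriving intent from a single prompt.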
almosthere ·12 days ago
I suspect we'll get to a point where the "code" is just instructions, codified in a special markup file, and LLMs write the simplest KISS code you can think of, but code that is extremely secure: effectively direct database access with all the security constraints you define, always applied correctly. In other words, think of the actual code as a non-committed artifact that is simply re-emitted whenever the descriptors change.
The long-term point of LLMs writing code isn't to give us human-quality code; it's to give us what we'd think of as assembly, but rigorously generated to meet all the auth requirements.
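A toy sketch of what I mean, with the descriptor shape and the emitter both hypothetical, and the LLM's role stubbed out by a plain template:

    # The declarative descriptor is the committed source of truth; the code
    # below is an emitted artifact, regenerated whenever the descriptor changes.
    import hashlib, json

    DESCRIPTOR = {
        "resource": "orders",
        "operations": ["read"],
        "auth": {"role": "customer", "row_filter": "user_id = :current_user"},
    }

    def emit_handler(spec: dict) -> str:
        """Render the simplest possible handler from the spec.
        In the imagined workflow an LLM performs this step; here it's a template."""
        query = f"SELECT * FROM {spec['resource']} WHERE {spec['auth']['row_filter']}"
        return (
            f"def get_{spec['resource']}(db, current_user):\n"
            f"    # auth constraint comes straight from the descriptor\n"
            f"    return db.query({query!r}, current_user=current_user)\n"
        )

    # Like object code, the output is fingerprinted and never committed.
    fingerprint = hashlib.sha256(json.dumps(DESCRIPTOR, sort_keys=True).encode()).hexdigest()
    print(f"# generated from descriptor {fingerprint[:12]}")
    print(emit_handler(DESCRIPTOR))

The generated function is deliberately dumb; all the security logic lives in the descriptor, so it gets applied correctly every time the code is regenerated.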