Anthropic’s prompt suggestions are simple, but you can’t give an LLM an open-ended question like that and expect the results you want! You, the user, are likely subconsciously picky, and there are always functional requirements the agent won’t magically infer: it cannot read minds, and it behaves like a literal genie. My approach to prompting is to write each potentially-very-large prompt in its own Markdown file (which can be tracked in git), then point the agent at that file and tell it to implement it. Once the work is completed and manually reviewed, I manually commit the work to git, with the message referencing the specific prompt file so I have good internal tracking.
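The workflow above can be sketched roughly as follows. Everything here is illustrative: the `prompts/` directory, file names, and commit messages are made-up examples, not conventions from any particular tool.

```shell
# Illustrative sketch of the prompt-file workflow; all paths and
# messages are hypothetical.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# 1. Write the (potentially very large) prompt as a tracked Markdown file.
mkdir -p prompts
cat > prompts/add-dark-mode.md <<'EOF'
Add a dark-mode toggle to the settings page.
- Persist the choice across sessions.
- Default to the OS preference.
EOF
git add prompts/add-dark-mode.md
git commit -qm "prompt: add dark-mode spec"

# 2. Point the agent at prompts/add-dark-mode.md and tell it to
#    implement that file. (Agent invocation omitted; tool-specific.)

# 3. After manually reviewing the agent's output, commit it with a
#    message referencing the prompt file for internal tracking.
touch settings.ts   # stand-in for the agent's reviewed changes
git add -A
git commit -qm "Implement prompts/add-dark-mode.md"

git log --format=%s
```

The payoff of referencing the prompt file in the commit message is that `git log` now doubles as an index from each change back to the exact spec that produced it.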
You can still reference dom.iterable and dom.asynciterable in your configuration file’s "lib" array, but they are now just empty files.
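Concretely, a `tsconfig.json` like the following still type-checks; the extra `lib` entries are now harmless no-ops. (This is a minimal sketch; the `target` value is an arbitrary example.)

```json
{
  "compilerOptions": {
    "target": "es2020",
    "lib": ["es2020", "dom", "dom.iterable", "dom.asynciterable"]
  }
}
```

Since their contents were folded into `dom`, you can also simply delete the two entries with no change in behavior.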