Prevent cycles in the system where users provide details/criteria for the results they're seeking (archetypes of people to connect with).
Non-malicious but unproductive actor (someone prompting nonsense that burns tokens, e.g. "cat", "tiger", "lion", while still expecting a list of users/personas pertinent to their search).
Define one or more heuristics for deciding whether to exit the loop programmatically. For example: has the user provided a role title, location, or industry? If so, we'd score the prompt/details higher and could more confidently move on.
3 of 5 "important" fields filled out correctly.
60%, a D -> passing, although more fields would yield more refined results.
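The exit heuristic above could be sketched as a simple completeness score. The field names, the five-field list, and the 60% threshold are illustrative assumptions, not a prescribed schema:

```python
# Sketch of the exit heuristic: score a query by how many "important"
# fields are filled in, and exit the clarification loop once it passes.
# Field names and the 0.6 threshold are illustrative assumptions.
IMPORTANT_FIELDS = ["role_title", "location", "industry", "seniority", "skills"]
PASSING_SCORE = 0.6  # 3 of 5 fields -> a "D", passing but refinable

def completeness_score(criteria: dict) -> float:
    """Fraction of important fields the user has provided."""
    filled = sum(1 for field in IMPORTANT_FIELDS if criteria.get(field))
    return filled / len(IMPORTANT_FIELDS)

def should_exit_loop(criteria: dict) -> bool:
    """Exit the clarification loop once the query is complete enough."""
    return completeness_score(criteria) >= PASSING_SCORE
```

A query like `{"role_title": "SWE", "location": "NYC", "industry": "fintech"}` scores 3/5 and passes, while `{"role_title": "SWE"}` alone would keep the loop going.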
We could prompt the user to enter this information if it's not already provided, but I'm assuming that isn't an option (given the nature of the question, we're already in a 'cycle').
Subsequently, I think it might be a good idea to whitelist a series of follow-up prompts based on previously ascertained facts (prompt engineering).
Ex 1: "Give me software engineers", to which we subsequently prompt:
- What frameworks are desired?
- How many years of experience do you require?
Ex 2: "Give me finance professionals"
- What certifications are you looking for? Series 65... Series 57...?
- What specialization? Corporate finance? Capital Markets?
Although this isn't ideal and we'd prefer to avoid this manual work, I think this is where the rubber meets the road: we come to appreciate that AI does in fact have "a bunch of conditionals under the hood".
In a perfect world we wouldn't have to do this, but IMO it's a feasible solution given the problem space. At the end of the day there is a finite number of personas, which we'd build up over time and which in sum would eliminate the possibility of a cycle (once again, assuming a non-malicious actor). I'd say it's not unlike i18n, where we map out the translations for each piece of text in JSON. Our AI responses are quasi "hand-made".
I think this is an online algorithm, something "living" and constantly evolving. We'd maintain a table/list of prompts/checkpoints where either a) the system alerted us that the user didn't progress further downstream, or b) the user flagged the results as inadequate.
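That checkpoint table could be sketched as a small log plus one query that surfaces the prompts that most often stall. The schema and outcome labels are illustrative assumptions:

```python
import sqlite3

# Sketch of the "living" checkpoint table: record each prompt turn and how
# it ended, so common failure points surface over time.
# Schema and outcome labels are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE checkpoints (
        session_id TEXT,
        prompt TEXT,
        outcome TEXT  -- 'progressed', 'abandoned', or 'flagged_inadequate'
    )
""")

def log_checkpoint(session_id: str, prompt: str, outcome: str) -> None:
    conn.execute("INSERT INTO checkpoints VALUES (?, ?, ?)",
                 (session_id, prompt, outcome))

def common_stalls(limit: int = 5) -> list:
    """Prompts most often abandoned or flagged: candidates for the whitelist."""
    return conn.execute("""
        SELECT prompt, COUNT(*) AS n FROM checkpoints
        WHERE outcome != 'progressed'
        GROUP BY prompt ORDER BY n DESC LIMIT ?
    """, (limit,)).fetchall()
```

Reviewing `common_stalls()` periodically is what makes the algorithm "online": the whitelist grows from observed failures rather than upfront guesswork.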
This solution would also help us identify common dead ends.
Ex 1: "List drug dealers in NYC"
- Typically these archetypes don't want to be found, so online results would not be easy to come by.
Ex 2: "Give me BTC millionaires"
- Many people don't like revealing their wealth, so this too would likely be difficult to ascertain. I know there are caveats, such as people with public wallets and/or C-level executives at Fortune 500 companies (public disclosure), but the idea is that we'd want to identify common dead ends and build up this list, from there in order to a) contemplate product questions, b) build the whitelist, and c) build additional tools/retrievers/tables for the data we can crawl.
In that way, I think we can easily evolve as job titles/searches/archetypes change in subsequent months/years.
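The dead-end list built up this way could be consulted before the loop even starts. The entries, the substring matching, and the explanations below are illustrative assumptions:

```python
from typing import Optional

# Sketch of a dead-end registry: query archetypes known to rarely yield
# public results get short-circuited with an explanation instead of looping.
# Entries and the naive substring matching are illustrative assumptions.
DEAD_ENDS = {
    "drug dealers": "These archetypes rarely maintain a public footprint.",
    "btc millionaires": "Wealth is seldom self-disclosed; public wallets "
                        "and executives under public disclosure are exceptions.",
}

def check_dead_end(query: str) -> Optional[str]:
    """Return an explanation if the query matches a known dead end."""
    normalized = query.lower()
    for pattern, reason in DEAD_ENDS.items():
        if pattern in normalized:
            return reason
    return None
```

In production the matching would likely be semantic rather than substring-based, but the shape is the same: a registry we grow as job titles/searches/archetypes evolve.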
One more point: this assumes we're not interested in human-in-the-loop solutions. For example, we could alert a CSR to intervene if the prompt history reaches 10 turns (arbitrary) and the agent is still unable to exit the loop programmatically (i.e., generate sufficient results given the criteria provided by the user).
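That escape hatch is just a turn budget checked on every iteration. The threshold of 10 is arbitrary per the text, and the action labels are illustrative assumptions:

```python
# Sketch of a human-in-the-loop escape hatch: if the agent has looped past
# a turn budget without sufficient results, hand off to a CSR.
# The threshold of 10 is arbitrary; action labels are assumptions.
MAX_TURNS = 10

def handle_turn(turn_count: int, has_sufficient_results: bool) -> str:
    """Decide the next action for one iteration of the clarification loop."""
    if has_sufficient_results:
        return "return_results"      # exit the loop programmatically
    if turn_count >= MAX_TURNS:
        return "escalate_to_csr"     # human in the loop takes over
    return "ask_follow_up"           # keep refining via the whitelist
```

This keeps the cycle bounded even for the queries the heuristics and whitelist can't resolve.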
I hope this doesn't sound completely out of left field in terms of potentially achievable solutions. Without example cycles to review, it's hard to say what a more elegant solution might be.
Hope that helps.