**Is your feature request related to a problem? Please describe.**
`LLMEvaluator` currently only supports OpenAI; it would be nice to also be able to use OpenAI models via Azure.
**Describe the solution you'd like**
I'd like to use evaluators with Azure OpenAI (e.g. `ContextRelevanceEvaluator(api='azure-openai')`).
In addition, I propose slightly changing the design of `LLMEvaluator` to allow more flexibility.
Currently, the parameter `api_key=Secret.from_env_var("OPENAI_API_KEY")` forces the user to provide an env var that is specific to OpenAI and would not be used by other generators. What about having something like a generic `generator_kwargs` parameter instead?
This wouldn't force the user to pass anything generator-specific to `LLMEvaluator`. It gives the flexibility to pass anything the generator can take (e.g. API keys, API version, or `azure_deployment` in the case of Azure) via `generator_kwargs`.
At the same time, if the user doesn't pass anything, the generator will still look for its required env vars during instantiation.
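As a rough sketch, the proposed constructor might look like the following. The `generator_kwargs` name comes from the proposal above, but the simplified signature is an assumption for illustration, not Haystack's actual API:

```python
from typing import Any, Dict, Optional


class LLMEvaluator:
    """Sketch of the proposed design; not the actual Haystack component."""

    def __init__(
        self,
        api: str = "openai",
        generator_kwargs: Optional[Dict[str, Any]] = None,
    ):
        # Nothing generator-specific is required here: API keys,
        # api_version, azure_deployment, etc. are all passed through
        # to the underlying generator via generator_kwargs.
        self.api = api
        self.generator_kwargs = generator_kwargs or {}


# Hypothetical usage with Azure OpenAI (deployment name made up):
evaluator = LLMEvaluator(
    api="azure-openai",
    generator_kwargs={"azure_deployment": "my-deployment"},
)
```

If nothing is passed, `generator_kwargs` stays empty and the underlying generator can still fall back to its own env-var lookup at instantiation time.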
I guess `api_key` needs to go through the deprecation cycle before being removed. Maybe we could just change it to `api_key=Secret.from_env_var("OPENAI_API_KEY", strict=False)` until it is fully deprecated, so that the env var is no longer required for other generators.
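To illustrate what `strict=False` buys us, here is a minimal stand-in for the env-var lookup. This is an assumption mirroring the intended semantics (a missing variable is tolerated instead of raising), not Haystack's actual `Secret` implementation:

```python
import os
from typing import Optional


def from_env_var(name: str, strict: bool = True) -> Optional[str]:
    # With strict=True (the current behavior), a missing env var raises,
    # effectively forcing OPENAI_API_KEY even for non-OpenAI generators.
    # With strict=False, a missing var simply resolves to None.
    value = os.environ.get(name)
    if value is None and strict:
        raise ValueError(f"Environment variable {name} is not set")
    return value
```

So during the deprecation window, users of other generators would no longer need to set `OPENAI_API_KEY` just to instantiate the evaluator.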
**Describe alternatives you've considered**
Subclassing `LLMEvaluator` (and all its child classes) into a custom component.
**Additional context**
Happy to hear your thoughts, especially if there are better solutions I didn't consider. :)
I'm currently a bit busy with other things, but I may be able to raise a PR with the proposal in the next few days.