
Add support for AzureOpenAI in LLMEvaluator #7946

Open
EdoardoAbatiTR opened this issue Jun 27, 2024 · 1 comment
Labels
topic:eval type:feature New feature or request

Comments

@EdoardoAbatiTR

Is your feature request related to a problem? Please describe.

LLMEvaluator currently only supports OpenAI; it would be nice if we could also use it with OpenAI models served via Azure.

Describe the solution you'd like

I'd like to use evaluators with Azure OpenAI (e.g. ContextRelevanceEvaluator(api='azure-openai'))

In addition, I propose slightly changing the design of LLMEvaluator to allow more flexibility.
Currently the parameter api_key=Secret.from_env_var("OPENAI_API_KEY") forces the user to provide an env var that is specific to OpenAI and would not be used by other generators.

What about having something like:

@component
class LLMEvaluator:
    def __init__(
        self,
        instructions: str,
        ...
        api: str = "openai",
        generator_kwargs: Optional[Dict[str, Any]] = None,  # instead of api_key
    ):
        ...
        # Select the generator based on `api` and forward the kwargs as-is.
        self.generator = OpenAIGenerator(**(generator_kwargs or {}))

?

This wouldn't force the user to pass anything generator-specific to LLMEvaluator. It gives the flexibility to pass anything the generator accepts (e.g. API keys, API version, or azure_deployment in the case of Azure) via generator_kwargs.
At the same time, if the user doesn't pass anything, the generator would still look for its required env vars during instantiation.
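To make the idea above concrete, here is a minimal, self-contained sketch of the proposed dispatch. The generator classes are trivial stand-ins (the real OpenAIGenerator and AzureOpenAIGenerator live in Haystack and take many more options), and the `_GENERATORS` lookup table is purely hypothetical:

```python
from typing import Any, Dict, Optional

# Minimal stand-ins for illustration only; the real classes live in Haystack.
class OpenAIGenerator:
    def __init__(self, **kwargs: Any) -> None:
        self.kwargs = kwargs

class AzureOpenAIGenerator:
    def __init__(self, **kwargs: Any) -> None:
        self.kwargs = kwargs

# Hypothetical dispatch table from the `api` string to a generator class.
_GENERATORS = {"openai": OpenAIGenerator, "azure-openai": AzureOpenAIGenerator}

class LLMEvaluator:
    def __init__(
        self,
        instructions: str,
        api: str = "openai",
        generator_kwargs: Optional[Dict[str, Any]] = None,
    ) -> None:
        if api not in _GENERATORS:
            raise ValueError(f"Unsupported api: {api!r}")
        self.instructions = instructions
        # Forward everything generator-specific untouched; with no kwargs,
        # the real generator would fall back to its own env vars.
        self.generator = _GENERATORS[api](**(generator_kwargs or {}))

evaluator = LLMEvaluator(
    "Rate the relevance of the context.",
    api="azure-openai",
    generator_kwargs={"azure_deployment": "my-deployment"},
)
print(type(evaluator.generator).__name__)  # AzureOpenAIGenerator
```

The key point of the design is that LLMEvaluator never interprets the kwargs itself; the chosen generator validates them during its own instantiation.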

I guess api_key needs to go through the deprecation cycle before being removed. Maybe we could just change it to api_key=Secret.from_env_var("OPENAI_API_KEY", strict=False) until it is deprecated, so that the variable is not required when other generators are used.
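For illustration, a toy stand-in mimicking the strict/non-strict lookup semantics assumed above (the real implementation is Haystack's Secret.from_env_var; this class only models the behavior relevant here):

```python
import os

# Toy stand-in: with strict=True a missing variable is an error at
# resolution time; with strict=False it resolves to None instead.
class EnvSecret:
    def __init__(self, var: str, strict: bool = True) -> None:
        self.var = var
        self.strict = strict

    def resolve_value(self):
        value = os.environ.get(self.var)
        if value is None and self.strict:
            raise ValueError(f"Environment variable {self.var!r} is not set")
        return value

os.environ.pop("OPENAI_API_KEY", None)  # ensure the variable is unset
lenient = EnvSecret("OPENAI_API_KEY", strict=False)
print(lenient.resolve_value())  # None: other generators are not blocked
```

With strict=False, users of non-OpenAI generators are no longer forced to set OPENAI_API_KEY just to instantiate the evaluator.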

Describe alternatives you've considered

Subclassing LLMEvaluator (and all of its child classes) into a custom component

Additional context

Happy to hear your thoughts, especially if there are better solutions I didn't consider. :)

I'm currently a bit busy with other things, but I may be able to raise a PR with the proposal in the next few days.

@anakin87 anakin87 added type:feature New feature or request topic:eval labels Jun 27, 2024
@lbux
Contributor
lbux commented Jun 27, 2024

I solved this in my PR for local evaluation support, but decided not to proceed with the PR: #7745

You can take what I built, strip the llama.cpp bits, and keep the generation_kwargs sections.
