Miguel Zabala • over 1 year ago
Any restrictions on using external APIs?
Hi, I've been carefully reading through the rules and understand that one of the key requirements is to deploy using NVIDIA AI Workbench. However, there don't seem to be any specific requirements about which LLM models to use. Can we use models like GPT-4o mini, or does it have to be something local or running directly on NVIDIA's infrastructure? Thank you very much

3 comments
tyler whitehouse Manager • over 1 year ago
Miguel,
The point is to use AI Workbench to develop projects/applications that can work directly with GPUs.
However, if you look at the two example projects (https://github.com/NVIDIA/workbench-example-agentic-rag and https://github.com/NVIDIA/workbench-example-hybrid-rag), you can see they are set up to use APIs, GPUs, or even containerized models.
In addition, you can see the NIM Anywhere project (https://github.com/NVIDIA/nim-anywhere), which takes a more complicated but probably more flexible approach.
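To make the swap-a-backend idea concrete, here's a minimal sketch (my own illustration, not code from those repos): an OpenAI-compatible client whose base URL decides whether you call a hosted endpoint or a containerized model running locally. The localhost port, model id, and environment variable names are placeholder assumptions.

```python
import os
from openai import OpenAI  # pip install openai

# Assumption for illustration: both backends speak the OpenAI-compatible
# chat-completions protocol (NIM containers expose such an endpoint).
# The port and model id below are placeholders.
USE_LOCAL = os.getenv("USE_LOCAL_MODEL", "0") == "1"

client = OpenAI(
    base_url="http://localhost:8000/v1" if USE_LOCAL
             else "https://integrate.api.nvidia.com/v1",
    api_key=os.getenv("NVIDIA_API_KEY", "not-needed-for-local"),
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Hello from AI Workbench!"}],
)
print(response.choices[0].message.content)
```

The application code stays identical either way; only the endpoint changes.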
Does this help answer your question?
Tyler
tyler whitehouse Manager • over 1 year ago
Another thing is that we really want projects/applications that can run on "non-data-center" GPUs, i.e., the class of GPUs found in laptops and desktops. That's what I'm excited about.
So I think you can use an API to prove out part of a concept, but you want to make sure that the end result isn't dependent on the API.
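One way to honor that, sketched below (my assumptions, not from the example projects): probe for a capable local GPU at startup and treat the hosted API only as a fallback, so the API is a convenience rather than a hard dependency. The VRAM threshold and the FORCE_BACKEND override are illustrative.

```python
import os

def pick_backend(min_vram_gb: float = 8.0) -> str:
    """Prefer a local model when a capable laptop/desktop GPU exists;
    fall back to a hosted API otherwise. Threshold is an assumption."""
    try:
        import torch  # pip install torch
        if torch.cuda.is_available():
            vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
            if vram_gb >= min_vram_gb:
                return "local"  # e.g., a containerized model on this machine
    except ImportError:
        pass
    return "api"  # hosted endpoint only as a fallback

backend = os.getenv("FORCE_BACKEND", pick_backend())
print(f"Using backend: {backend}")
```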
Does this make sense?
Miguel Zabala • over 1 year ago
Hi Tyler,
Thank you for the clarification and for sharing the examples. This helps me better understand the desired approach and how to work with GPUs outside of data center environments.
Thanks again!
Best regards,
Miguel