Why build using generative AI tools on Injective?
With generative AI coding tools, you can build applications very quickly, including on Injective. However, building very fast in the wrong direction is not ideal. You will find skills, agents, workflows, and MCP servers here that will help you with effective AI software engineering.

What types of generative AI tools are available?
- LLMs - Large language models (LLMs) are the base-layer technology powering almost all generative AI software engineering. Almost all AI development tools are wrappers around LLMs. Popular ones include Claude Opus (Anthropic), Gemini (Google), and Kimi (Moonshot AI).
- LLM providers - Low- and mid-tier LLMs can be run on retail/consumer hardware, but the top-tier LLMs need to be accessed remotely. You have three main options:
- Local providers - For example, using LM Studio or Ollama.
- Remote providers from model developers - For example, accessing Claude Opus via an Anthropic subscription.
- Remote providers from model aggregators - For example, accessing Claude Opus, Gemini, or Kimi via an OpenRouter subscription.
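In practice, switching between these options is straightforward because local providers (such as Ollama or LM Studio) and aggregators (such as OpenRouter) both expose OpenAI-compatible chat-completions endpoints. A minimal sketch, assuming Ollama's default local port and OpenRouter's public API; the `build_request` helper and the model ids are illustrative, not part of any provider's SDK:

```python
import json
import urllib.request

# Documented default endpoints for each provider (assumptions worth verifying
# against the provider's own docs before use).
PROVIDERS = {
    "ollama": "http://localhost:11434/v1/chat/completions",
    "openrouter": "https://openrouter.ai/api/v1/chat/completions",
}

def build_request(provider, model, prompt, api_key=None):
    """Construct (but do not send) an OpenAI-style chat-completions request."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # local providers typically need no key
        headers["Authorization"] = f"Bearer {api_key}"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        PROVIDERS[provider], data=json.dumps(body).encode(), headers=headers
    )

# The same prompt against a local model and a remote one:
local = build_request("ollama", "llama3.1", "Summarise Injective's exchange module.")
remote = build_request(
    "openrouter", "anthropic/claude-opus-4",
    "Summarise Injective's exchange module.", api_key="YOUR_KEY",
)
```

Because the request shape is identical, only the base URL, model name, and credentials change when you move between providers.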
- Tools - These can be anything from functions, to scripts, to command-line interfaces (CLIs), packaged in a manner that makes them understood or callable by LLMs. For example, if you want the LLM to access real-time information, i.e. information that was not available when the LLM was trained, you would need to give it access to call tools for web searches or other data APIs.
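Concretely, "packaged for an LLM" usually means a plain function plus a JSON-schema description that the model reads to decide when and how to call it. A sketch following the common tool-use schema shape; the `web_search` function here is a stand-in for a real search backend, not an actual API:

```python
# A stand-in for a real web-search or data-API backend.
def web_search(query: str, max_results: int = 5) -> list[str]:
    return [f"result {i} for {query!r}" for i in range(1, max_results + 1)]

# The schema the LLM sees: name, purpose, and typed parameters.
WEB_SEARCH_TOOL = {
    "name": "web_search",
    "description": "Search the web for information newer than the model's training data.",
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms."},
            "max_results": {"type": "integer", "default": 5},
        },
        "required": ["query"],
    },
}

# When the model emits a call such as {"name": ..., "input": ...},
# the application dispatches it to the underlying function:
call = {"name": "web_search", "input": {"query": "INJ staking APR", "max_results": 2}}
results = web_search(**call["input"])
```

The model never executes anything itself: it only emits a structured call, and your application runs the function and feeds the results back.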
- MCP - The Model Context Protocol (MCP) is a protocol designed for the discovery and calling of tools by LLMs. It standardises the way different LLMs and LLM providers invoke tools; previously, each LLM or LLM provider had its own competing standard or protocol for doing so.
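Under the hood, MCP is built on JSON-RPC 2.0: a client discovers a server's tools with a `tools/list` request and invokes one with `tools/call`. A sketch of the two message shapes (the ids and the tool arguments are illustrative):

```python
import json

# Ask an MCP server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invoke one of the discovered tools by name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "web_search",
        "arguments": {"query": "Injective chain upgrade schedule"},
    },
}

# Messages are serialised as JSON and exchanged over stdio or HTTP.
wire = json.dumps(call_request)
```

Because every server answers the same two methods, any MCP-aware LLM client can use any MCP server without bespoke integration code.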
- Skills, workflows, agents - These are markdown files that optionally reference supporting resources, tools, MCP servers, and more. They are designed specifically to work with AI engineering harnesses (but can be used in other contexts). They can be recursive: for example, a skill can reference other skills. Likewise, workflows are usually a set of skills with a defined order, and agents are sets of workflows and skills. Note that the term "agents" is overused, with multiple definitions, so the above does not apply in other contexts.
- AI engineering IDEs - These are either dedicated IDEs or plugins for existing IDEs that allow you to prompt LLMs, including executing tool calls or MCP servers, and apply their output to the code base open within the IDE. Popular ones include: Roo, Cline, and Cursor.
- AI engineering harnesses - These are command-line interfaces (CLIs) or terminal user interfaces (TUIs) designed around invoking LLMs for coding tasks. They operate directly on the file system, and often come with baked-in optimisations and utilities for engineering tasks. They tend to be more powerful than AI engineering IDEs, as they work best when skills, workflows, and agents are used. Popular ones include: Claude Code (Anthropic), Codex (OpenAI), and OpenCode (unaffiliated).
- AI engineering orchestrators - These are tools that act as wrappers around harnesses. Their main intent is to enable long-running loops or parallelisation of harness invocations, making it possible to have LLMs work autonomously on longer and more complex tasks without constant human supervision. Popular ones include: Ralph, GSD.
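The core of such an orchestrator can be surprisingly small. A minimal sketch of the long-running-loop pattern, not any orchestrator's actual implementation: the harness is re-invoked on the same prompt until its output contains a completion marker that the prompt asks the LLM to emit. The `claude -p` non-interactive invocation is an assumption based on Claude Code's print mode; substitute the harness you actually use.

```python
import subprocess

def run_harness(prompt: str) -> str:
    """Invoke a harness CLI non-interactively and capture its output.
    Assumes Claude Code's `-p` (print) flag; swap in your own harness."""
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def loop(prompt: str, invoke=run_harness, max_iterations: int = 10) -> int:
    """Re-run the harness until it reports completion or the budget runs out.
    Returns the number of iterations used."""
    for i in range(1, max_iterations + 1):
        output = invoke(prompt)
        if "TASK COMPLETE" in output:  # marker the prompt asks the LLM to emit
            return i
    return max_iterations

# Demonstration with a stub standing in for real harness calls:
outputs = iter(["still working", "still working", "TASK COMPLETE"])
iterations = loop("Fix the failing tests.", invoke=lambda p: next(outputs))
```

Real orchestrators add persistence, parallel worktrees, and safety limits on top of this loop, but the driving idea is the same: keep re-invoking the harness until the task is done.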
