App Node Clients
The App Node Client is a core component of the AXES network that enables AI application developers to connect their applications to the network. It acts as a bridge between the decentralized AXES protocol and a publisher's models. Each application developer runs their own App Node Client to participate in the network and enable their application to interact with users and other applications.
Key Functionalities
Smart Contract Interaction: Each App Node Client is associated with a unique smart contract on the AXES network. This allows the client to interact with the blockchain, handle prompt requests, submit responses, and manage application-specific logic and transactions.
Prompt Handling: When a user sends a prompt request through the AXES search bar, the Discovery Engine matches it with the most suitable App Node Client. The client receives the encrypted prompt, decrypts it using its private key, processes the request, and sends back an encrypted response to the user.
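To illustrate these two responsibilities together, the sketch below shows a client binding to its associated smart contract, decrypting a routed prompt, and returning an encrypted response. This is a minimal sketch: the axes package name, the AppNodeClient class, and every method shown are assumptions for illustration, not confirmed AXES SDK interfaces.

```python
# Hypothetical sketch of an App Node Client handling a prompt end to end.
# The `axes` package, AppNodeClient class, and all method names below are
# assumed for illustration; they are not confirmed AXES SDK interfaces.
from axes import AppNodeClient  # assumed import

client = AppNodeClient(
    contract_address="0xYourAppContract",    # the client's unique smart contract
    private_key_path="./keys/app_node.key",  # key used for decryption and signing
)

def run_model(prompt_text: str) -> str:
    """Publisher-defined application logic (placeholder)."""
    return f"Echo: {prompt_text}"

# Assumed callback: the Discovery Engine routes a matched, encrypted prompt here.
@client.on_prompt
def handle_prompt(request):
    prompt_text = client.decrypt(request.ciphertext)       # decrypt with the private key
    answer = run_model(prompt_text)                         # process the request
    encrypted = client.encrypt_for(request.sender, answer)  # encrypt for the requesting user
    client.submit_response(request.id, encrypted)           # respond via the smart contract

client.run()  # start listening for matched prompt requests
```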
Infrastructure Integration: App Node Clients can be deployed on various infrastructures chosen by the publisher, such as AWS, GCP, IBM, or decentralized solutions like DePin networks. The client provides an interface to connect the application's infrastructure to the AXES network seamlessly.
gRPC and REST API Support: Publishers can configure their App Node Clients to expose gRPC and REST API endpoints. This allows efficient communication between the client and the application, enabling features like streaming data, real-time interactions, and complex service calls.
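As a rough illustration, a publisher might wire REST and gRPC entry points to their application handlers roughly as follows. The expose_rest and expose_grpc helpers, their options, and the service names are assumptions used only to show the shape of such a configuration, not documented AXES SDK calls.

```python
# Hypothetical sketch: exposing REST and gRPC entry points from an
# App Node Client. Method names and options are assumptions for illustration.
from axes import AppNodeClient  # assumed import

client = AppNodeClient(contract_address="0xYourAppContract")

# Assumed helper for serving an HTTP route backed by application logic.
client.expose_rest(
    path="/v1/generate",  # REST route the application serves
    port=8080,
    handler=lambda body: {"text": f"Echo: {body['prompt']}"},
)

# Assumed helper for exposing a gRPC service defined by the application.
client.expose_grpc(
    service="AppService",  # gRPC service name from the application's .proto
    port=50051,
    stream=True,           # enable streaming responses for real-time interactions
)

client.run()
```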
Canvas GUI Integration: App Node Clients can leverage the axes.connect.canvas() method to create interactive user interfaces within the AXES platform. The canvas object provides a web browser-like environment where publishers can build custom user experiences and interactive elements for their AI applications.
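A minimal sketch of how the canvas hook might be used appears below. Apart from axes.connect.canvas() itself, which is named above, the arguments and the methods on the returned canvas object are assumptions for illustration only.

```python
# Hypothetical sketch of building a simple interactive UI with the canvas.
# axes.connect.canvas() is referenced in this documentation; the options and
# the canvas object's methods shown here are assumed for illustration.
import axes  # assumed import

canvas = axes.connect.canvas(title="My AI App")  # browser-like surface (assumed arguments)

# Assumed canvas methods for a basic prompt/response experience.
canvas.add_text_input(id="prompt", placeholder="Ask something...")
canvas.add_button(id="send", label="Send")

def on_send(event):
    prompt = canvas.get_value("prompt")
    canvas.render_markdown(f"**You asked:** {prompt}")

canvas.on_click("send", on_send)
canvas.show()
```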
Network Operated Models (NOM)
Network Operated Models refer to AI models that are maintained and hosted by the AXES network itself. These models are designed to provide basic AI functionalities and serve as a foundation for the ecosystem. Key characteristics of NOMs include:
Managed by AXES: The AXES network team is responsible for developing, updating, and optimizing these models to ensure their reliability and performance.
Open Access: NOMs are accessible to all users and publishers on the AXES network, providing a base level of AI capabilities that can be leveraged by various applications.
Baseline Functionalities: NOMs typically offer fundamental AI functionalities such as text generation, image recognition, and basic natural language processing tasks. They serve as building blocks for more advanced and specialized AI applications.
Network Resources: The computational resources required to run NOMs are provided by the AXES network infrastructure, ensuring scalability and availability.
Publishers can integrate NOMs into their applications by making API calls to the designated endpoints provided by the AXES network. This allows them to utilize the pre-trained models and offload the computational overhead to the network.
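For instance, calling a NOM could look like a plain HTTP request to a network-provided endpoint, as in the sketch below. The endpoint URL, model identifier, payload fields, and authorization scheme are placeholders; consult the AXES documentation for the actual designated endpoints.

```python
# Hypothetical sketch: calling a Network Operated Model (NOM) over HTTP.
# The endpoint URL, model name, and payload fields are placeholders, not
# actual AXES network values.
import requests

NOM_ENDPOINT = "https://nom.axes.example/v1/generate"  # placeholder endpoint

response = requests.post(
    NOM_ENDPOINT,
    json={
        "model": "axes-text-base",  # placeholder NOM identifier
        "prompt": "Summarize the AXES network in one sentence.",
    },
    headers={"Authorization": "Bearer <publisher-api-key>"},  # placeholder auth
    timeout=30,
)
response.raise_for_status()
print(response.json())  # pre-trained model output returned by the network
```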
Publisher Operated Models (POM)
Publisher Operated Models refer to AI models that are developed, hosted, and maintained by individual publishers on the AXES network. POMs enable publishers to create specialized and advanced AI applications tailored to specific domains or use cases. Key characteristics of POMs include:
Publisher Ownership: POMs are fully owned and controlled by the publishers who develop them. Publishers have the flexibility to design, train, and deploy their models according to their specific requirements.
Customization: Publishers can customize POMs to address specific industry verticals, niche domains, or unique user needs. This allows for the creation of highly specialized AI applications.
Infrastructure Flexibility: Publishers have the freedom to choose the infrastructure on which they host their POMs. They can utilize cloud platforms like AWS or GCP, or opt for decentralized solutions such as DePin networks.
Monetization: Publishers can monetize their POMs by setting usage fees or subscription models. They have control over the pricing and revenue generation aspects of their AI applications.
Intellectual Property: POMs are considered the intellectual property of the publishers who create them. The AXES network provides mechanisms to protect and enforce IP rights associated with POMs.
To deploy a POM, publishers need to set up an App Node Client that connects their AI application to the AXES network. They can then configure the client to handle prompt requests, process transactions, and provide access to their specialized AI functionalities.
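A sketch of what that configuration might look like is shown below. The register_model call, its pricing fields, and the model-serving URL are assumptions used to illustrate the idea of wiring a publisher-hosted model into the client; they are not documented AXES SDK interfaces.

```python
# Hypothetical sketch: wiring a Publisher Operated Model (POM) into an
# App Node Client. The register_model call, pricing field, and model-serving
# URL are assumptions for illustration.
from axes import AppNodeClient  # assumed import

client = AppNodeClient(contract_address="0xYourAppContract")

# Assumed registration of a publisher-hosted model behind the client.
client.register_model(
    name="legal-contract-analyzer",              # publisher-defined model name
    inference_url="http://10.0.0.5:8000/infer",  # where the POM is actually served
    price_per_request="0.002 AXES",              # publisher-controlled usage fee
)

@client.on_prompt
def handle(request):
    # Forward the decrypted prompt to the registered POM and return its output.
    prompt_text = client.decrypt(request.ciphertext)
    return client.call_model("legal-contract-analyzer", prompt_text)

client.run()
```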
Setting Up an App Node Client
To set up an App Node Client, follow these steps:
Obtain a Publisher Account: Create a smart account on the AXES network and mint a Publisher NFT to upgrade your account to a publisher role.
Prepare Application Files: Organize your AI application files, dependencies, and configurations in a suitable format for deployment.
Configure Deployment Settings: Set up the necessary deployment configurations, such as choosing the infrastructure provider (e.g., AWS, GCP, custom), specifying resource requirements, and defining scaling policies.
Build and Deploy: Initiate the build process for your App Node Client. This typically involves creating a Docker image, passing automated tests, and deploying the client to the selected infrastructure.
Set Entry Points and Permissions: Define the entry points for your App Node Client, such as gRPC and REST API endpoints, and configure the appropriate permissions and access controls.
Connect to AXES Network: Use the provided CLI or SDK to connect your App Node Client to the AXES network. This involves associating the client with your publisher account and registering it with the Discovery Engine (a combined sketch of the configuration, deployment, and connection steps follows below).
Manage and Monitor: Utilize the AXES publisher dashboard to manage your App Node Client, monitor its performance, and access advanced settings for customization and optimization.
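The sketch below ties the configuration, deployment, entry-point, and connection steps together using an SDK-style flow. It is a minimal sketch only: every class, method, and field shown is an assumption for illustration, and the actual AXES CLI and SDK interfaces may differ.

```python
# Hypothetical sketch of the deployment and connection steps via an SDK.
# All classes, methods, and fields below are assumptions for illustration;
# refer to the AXES SDK for the actual interfaces.
from axes import AppNodeClient, Deployment, PublisherAccount  # assumed imports

# Account that holds the Publisher NFT minted in the first step.
publisher = PublisherAccount.load("./keys/publisher.key")

# Deployment configuration: infrastructure provider, resources, scaling policy.
deployment = Deployment(
    provider="aws",               # e.g. "aws", "gcp", or a DePin network
    image="myorg/my-ai-app:1.0",  # Docker image built from the application files
    cpu="2",
    memory="4Gi",
    min_replicas=1,
    max_replicas=5,
)

client = AppNodeClient.deploy(deployment)           # build and deploy the client
client.expose_rest(path="/v1/generate", port=8080)  # define an entry point
client.set_permissions(allow_public=True)           # configure access controls

client.connect(publisher)         # associate the client with the publisher account
client.register_with_discovery()  # register with the Discovery Engine

print(client.status())  # ongoing management continues in the publisher dashboard
```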
For more detailed information and code examples, refer to the AXES documentation and SDK.