We present our biweekly technical development update on our Decentralized AI Platform, highlighting the significant progress made across its core components.
This period has seen noteworthy advancements, bringing us closer to our shared vision of a full-scale, modernized Decentralized AI Platform.
Sandbox
Frontend
Fixed responsive-layout issues in the PreviewArea tab;
Added .env subsystem for managing application environment variables;
Implemented a URL revocation system to remove dependencies on previously compiled Previews (see the sketch after this list);
Introduced file histories in the Code Editor, allowing users to retain their editing positions and track the change history of each project file;
Resolved an issue where system messages were not deleted after their timeout expired;
Applied thin scrollbar styling;
Made minor CSS adjustments.
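For context on the URL revocation item above, here is a minimal sketch of the idea: compiled previews are exposed as blob URLs, and revoking them drops the browser's references to stale compilation results. The class and method names are illustrative assumptions, not the actual Sandbox code.

```typescript
// Illustrative sketch only; names are assumptions, not the Sandbox source.
class PreviewUrlRegistry {
  private urls = new Set<string>();

  // Wrap a compiled preview in a blob URL and remember it for later cleanup.
  register(compiled: Blob): string {
    const url = URL.createObjectURL(compiled);
    this.urls.add(url);
    return url;
  }

  // Revoke all previously issued URLs so the browser can free the old blobs.
  revokeAll(): void {
    for (const url of this.urls) {
      URL.revokeObjectURL(url);
    }
    this.urls.clear();
  }
}
```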
Backend
Enabled sourcing of CORS lists from environment variables;
Integrated .env subsystem for handling application environment variables;
Added input validation for metadata in the ServicePreviewHeader form, ensuring the service endpoint begins with https://; otherwise the service will not load (both backend checks are sketched below).
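A minimal sketch of the two backend changes above, assuming a Node-style runtime; the variable name CORS_ALLOWED_ORIGINS and the helper name are hypothetical, not the platform's actual code.

```typescript
// Hypothetical names for illustration; not the platform's actual backend code.

// Source the CORS allowlist from a comma-separated environment variable.
const corsOrigins: string[] = (process.env.CORS_ALLOWED_ORIGINS ?? "")
  .split(",")
  .map((origin) => origin.trim())
  .filter((origin) => origin.length > 0);

// Reject service endpoints that are not served over HTTPS;
// without this check the service will not load.
function validateServiceEndpoint(endpoint: string): void {
  if (!endpoint.startsWith("https://")) {
    throw new Error(`Service endpoint must begin with https://, got "${endpoint}"`);
  }
}
```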
Marketplace
Frontend
Introduced the “Data Presets” (Fine-tuning Service) tab on the service page, providing dataset management functionality (refer to attached video):
Analyzing and displaying dataset statistics (size, format, rate, issues, and a Word Frequencies graph) with support for multiple graph types: bar, pie, and treemap (word-frequency counting is sketched after this list);
Tools for resolving detected issues and for merging datasets to extend the data, allowing users to select example datasets or their recently used files;
Capability to improve datasets and use them for training models;
Updated the file upload component;
Enhanced scrollbar visuals.
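To illustrate the word-frequency statistic behind the graphs mentioned above, here is a hedged sketch; the tokenization rule and function name are assumptions, not the Marketplace implementation.

```typescript
// Illustrative word-frequency counter; the tokenizer is a simplifying assumption.
function wordFrequencies(samples: string[], topN = 20): Array<[string, number]> {
  const counts = new Map<string, number>();
  for (const sample of samples) {
    // Naive tokenizer: lowercase words made of letters and apostrophes.
    for (const word of sample.toLowerCase().match(/[a-z']+/g) ?? []) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  // Keep the most frequent entries for the bar, pie, or treemap graph.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN);
}
```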
Backend
Implemented field validation checks for the notification service;
Fixed a bug in the service-status service;
Created API documentation for the notification service;
Developed the core functionality of a dataset preprocessing service for the Fine-tuning Service;
Integrated the dataset preprocessing service with S3 storage and implemented procedures for initial data sanitization and validation;
Added functionality for detecting and correcting common textual data imperfections within the dataset preprocessing service for fine-tuning language models (see the sanitization sketch after this list);
Built a module in the dataset preprocessing service to calculate statistics on identified data issues;
Enhanced the dataset preprocessing service with dataset merging and preliminary validation features;
Prepared showcase datasets for textual generative and interactive configurations of the Fine-tuning Service.
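As a rough sketch of the sanitization and issue-statistics steps described above, assuming a simple line-per-sample text dataset; the issue categories and names are illustrative, not the preprocessing service's actual rules.

```typescript
// Illustrative sanitization pass; issue categories are assumptions.
interface IssueStats {
  empty: number;
  duplicate: number;
  badWhitespace: number;
}

function sanitizeSamples(samples: string[]): { cleaned: string[]; stats: IssueStats } {
  const stats: IssueStats = { empty: 0, duplicate: 0, badWhitespace: 0 };
  const seen = new Set<string>();
  const cleaned: string[] = [];

  for (const raw of samples) {
    // Collapse runs of whitespace and trim the ends.
    const text = raw.replace(/\s+/g, " ").trim();
    if (text !== raw) stats.badWhitespace += 1;
    if (text.length === 0) {
      stats.empty += 1;
      continue;
    }
    if (seen.has(text)) {
      stats.duplicate += 1; // drop exact duplicates, but count them
      continue;
    }
    seen.add(text);
    cleaned.push(text);
  }
  return { cleaned, stats };
}
```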
JS SDK
Getting ready for release:
Fixed an issue with service client creation related to calling a service with a predefined payment strategy;
Separated the shared portion of the Node.js SDK into a standalone package, snet-sdk-core, enabling faster updates and improvements.
The web SDK has been restructured; it now includes the following modules (outlined in the sketch after this list):
Account (for interacting with the user’s Metamask account);
WebServiceClient (for invoking service functions);
ServiceMetadataProviderWeb (a new module extracted from the Service Client module to resolve errors with predefined payment strategies, enabling calls that use only the Payment or Free-call strategies);
Payment strategies have been partially moved to snet-sdk-core, eliminating code duplicated between snet-sdk-web and snet-sdk-node; this also resolved errors when invoking services with the Free-call payment strategy;
TrainingProviderWeb (a new module managing creation, update, deletion, and status retrieval of training models, as well as fetching existing models).
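A rough TypeScript outline of the module responsibilities listed above; these interfaces illustrate the described structure and are not the package's actual type definitions.

```typescript
// Illustration of the restructured modules; not the published snet-sdk-web types.
interface Account {
  // Interacts with the user's MetaMask account.
  getAddress(): Promise<string>;
}

interface ServiceMetadataProviderWeb {
  // Extracted from the Service Client module; resolves service metadata so
  // calls can use only the Payment or Free-call strategies.
  fetchMetadata(orgId: string, serviceId: string): Promise<unknown>;
}

interface WebServiceClient {
  // Invokes service functions.
  invoke(method: string, payload: unknown): Promise<unknown>;
}

interface TrainingProviderWeb {
  // Manages training models: create, update, delete, status, and listing.
  createModel(params: unknown): Promise<string>;
  getModelStatus(modelId: string): Promise<string>;
  listModels(): Promise<string[]>;
}
```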
The example has been updated, allowing users to explore the full functionality of snet-sdk-web.
After updating snet-sdk-node, these changes will be published in new versions of:
snet-sdk-core;
snet-sdk-web;
snet-sdk-node.
Python SDK
Completed major work for Training v2 integration;
Initiated preparation for Training v2 testing;
Prototyped, improved, and covered the following modules with unit tests:
config.py, concurrency_manager.py, and ipfs_utils.py (100% coverage);
account.py, client_lib_generator.py, and utils.py (80% coverage).
CLI
Conducted testing for private key and mnemonic encryption;
Started developing functional tests;
Created a video guide for Filecoin usage (to be published soon).
Daemon
Training v2 core development:
Added parsing of service provider proto files using an AST to identify training-related methods and dataset requirements specified via proto options (the idea is sketched after this list);
Implemented proxying of new methods from the daemon to the service provider, storing intermediate information in etcd, and verifying access by addresses;
Redesigned the model storage architecture;
Analyzed and redesigned the requirements for service proto files.
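To make the proto-option idea concrete, here is a hedged sketch of method discovery over proto source; the daemon does this with a real AST parser in Go, and the option name used here is an assumption for illustration only.

```typescript
// Toy scanner, not the daemon's AST-based parser; the option name is assumed.
const RPC_WITH_OPTIONS =
  /rpc\s+(\w+)\s*\([^)]*\)\s*returns\s*\([^)]*\)\s*\{([^}]*)\}/g;

function findTrainingMethods(protoSource: string): string[] {
  const methods: string[] = [];
  for (const match of protoSource.matchAll(RPC_WITH_OPTIONS)) {
    const [, name, body] = match;
    // A training-related method is marked with a custom proto option.
    if (body.includes("(training.is_training_method)")) {
      methods.push(name);
    }
  }
  return methods;
}
```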