
Shivogo John

Machine Learning Engineer & Researcher


🧠 Federated Learning with Blockchain & IPFS

📌 Project Overview

This project integrates Federated Learning (FL) with Blockchain and IPFS to create a decentralized, auditable, and transparent AI training ecosystem. The system ensures that local data from different clients (e.g., stores, hospitals, IoT devices) remains private, while model updates are aggregated securely on a central node. Each aggregation round is recorded on the Ethereum blockchain for transparency, and the aggregated global model is stored on IPFS for decentralized access.

βš™οΈ System Architecture

System Diagram

1. Clients (Local Nodes)

2. Central Server (Aggregator Node)

3. IPFS Storage

4. Ethereum Blockchain Logging

5. Dashboard & Visualization

📂 Data Flow Example

  1. Client A (Store A) submits input data to its prediction system.
  2. Local model predicts output and updates its parameters.
  3. Model weights are sent to the central aggregator (not raw data).
  4. Aggregator combines weights from all clients → generates a global model.
  5. Global model stored on IPFS + metadata logged to blockchain.
  6. Dashboard updates global accuracy & blockchain records.
  7. Improved model sent back to clients for further training → cycle continues.
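Step 4 above is the classic FedAvg aggregation: each client's weights are averaged, weighted by how much local data that client trained on. A minimal sketch (the client names, layer shapes, and sample counts below are illustrative assumptions, not values from the deployed system):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: average per-layer weights across clients, weighted by data size.

    client_weights: one list of np.ndarray layers per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Each client's contribution is proportional to its share of the data.
        avg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        global_weights.append(avg)
    return global_weights

# Toy example: two clients, a single layer of two weights each.
client_a = [np.array([1.0, 2.0])]   # trained on 100 samples
client_b = [np.array([3.0, 4.0])]   # trained on 300 samples
global_model = fed_avg([client_a, client_b], client_sizes=[100, 300])
# Client B contributes 75% of the average, so the result leans toward its weights.
```

Because only these weight arrays cross the network, the raw data in step 1 never leaves the client, which is the privacy guarantee the system relies on.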

πŸ› οΈ Frameworks & Tools Used

🔗 Useful Links

Contact & Code Access

I am happy to discuss this project, answer any questions, or provide access to the source code for the client-server and blockchain components upon request. Please feel free to reach out to me.


📖 Documentation

🌟 Key Contributions of This System

  1. Data Privacy – Local training ensures raw data never leaves the client.
  2. Transparency – Blockchain records provide immutable proof of training rounds.
  3. Integrity – Models stored on IPFS with unique hashes for verification.
  4. Scalability – Supports many clients and nodes, including lightweight edge devices.
  5. Improved Accuracy – Collaborative training boosts overall model performance.
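The integrity guarantee in point 3 can be enforced client-side: after fetching the global model, a client compares the file's digest against the value recorded on-chain and rejects any mismatch. The sketch below uses a plain SHA-256 digest for illustration (real IPFS CIDs encode a sha2-256 multihash, so production verification would decode the CID instead); the file paths are assumptions:

```python
import hashlib

def sha256_of_file(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path, expected_digest):
    # Accept the downloaded model only if its bytes match the recorded digest.
    return sha256_of_file(path) == expected_digest
```

A tampered or corrupted model fails this check before it is ever loaded, so the blockchain record and the IPFS content hash together give end-to-end verifiability.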

Practical notes, risks and recommended practices

Quick linear summary

  1. Clients train locally and produce weight updates.
  2. Clients send updates to the central aggregator.
  3. Aggregator performs FedAvg to create the global model.
  4. Aggregated model is saved to disk.
  5. Saved model is uploaded to IPFS → returns ipfs_hash.
  6. Aggregation metadata + ipfs_hash is recorded on the blockchain → returns block_tx.
  7. Ledger is updated and dashboard published.
  8. Clients fetch or receive the new global model → next round begins.
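The eight steps above can be sketched as a single orchestration function for one round. The helpers `upload_to_ipfs` and `log_round_on_chain` are hypothetical stand-ins for the real IPFS and Ethereum clients (not actual library calls); in the deployed system they would be backed by an IPFS node and a smart-contract transaction:

```python
import json
import hashlib
import os
import tempfile

def upload_to_ipfs(path):
    # Hypothetical stand-in: a real system would call an IPFS client's add()
    # and receive a content identifier (CID) back.
    with open(path, "rb") as f:
        return "Qm" + hashlib.sha256(f.read()).hexdigest()[:16]

def log_round_on_chain(round_num, ipfs_hash):
    # Hypothetical stand-in for an Ethereum contract call; returns a tx id.
    return f"0xtx-{round_num}-{ipfs_hash[:8]}"

def run_round(round_num, client_updates, client_sizes, model_path):
    # Steps 3-4: FedAvg over flat client weight vectors, then persist to disk.
    total = sum(client_sizes)
    global_w = [sum(u[i] * n / total for u, n in zip(client_updates, client_sizes))
                for i in range(len(client_updates[0]))]
    with open(model_path, "w") as f:
        json.dump(global_w, f)
    # Step 5: upload the saved model to IPFS -> ipfs_hash.
    ipfs_hash = upload_to_ipfs(model_path)
    # Step 6: record round metadata + ipfs_hash on the blockchain -> block_tx.
    block_tx = log_round_on_chain(round_num, ipfs_hash)
    # Steps 7-8: ledger entry consumed by the dashboard and fetched by clients.
    return {"round": round_num, "ipfs_hash": ipfs_hash,
            "tx": block_tx, "weights": global_w}

path = os.path.join(tempfile.gettempdir(), "global_model.json")
ledger = run_round(1, client_updates=[[1.0, 2.0], [3.0, 4.0]],
                   client_sizes=[1, 1], model_path=path)
```

Each returned ledger entry ties a round number to an immutable content hash and a transaction id, which is exactly the audit trail the dashboard visualizes.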