Recent strides in machine learning have propelled numerous breakthroughs in various industries, capturing the imagination of researchers and enthusiasts alike. While traditional machine learning methodologies necessitate the collection of vast datasets for centralized model training, the advent of distributed machine learning has opened new frontiers for addressing large-scale learning challenges.

One pioneering approach in this landscape is federated learning, a distributed paradigm that disrupts the conventional link between data collection and model training. Employing multi-party computation and model aggregation, federated learning mitigates privacy concerns by keeping data localized. This transformative approach gains further traction in the era of deep neural networks, evolving into what is now known as federated deep learning.

At the forefront of this domain is a project that investigates federated averaging and its variants, tailored to a range of text mining tasks. The driving force behind this endeavor is not only better model fusion but also improved communication efficiency and personalization.
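
For concreteness, federated averaging (FedAvg) fuses client models by taking a data-size-weighted average of their parameters after each round of local training. Below is a minimal PyTorch-style sketch; the function name and the assumption that every state entry is a floating-point parameter tensor are illustrative, not taken from the project's codebase.

```python
import copy

def federated_average(client_states, client_sizes):
    """Fuse client models via a data-size-weighted parameter average.

    client_states: list of model state_dicts, one per participating client
    client_sizes:  number of local training examples held by each client
    (Assumes every state entry is a floating-point parameter tensor.)
    """
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        # Weighted sum of each parameter tensor across clients.
        avg_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg_state
```

The server then broadcasts the fused state back to the clients (e.g. via `model.load_state_dict(avg_state)`) before the next round of local training.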

The research project explores the intricacies of federated learning through a series of publications that shed light on the latest developments:

  1. Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning
    This comprehensive review surveys the evolving landscape of federated learning, from foundational model fusion to the cutting-edge Federated X Learning.

  2. Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning
    This paper introduces dynamic sampling and selective masking to improve communication efficiency in federated learning; a masking sketch follows this list.

  3. Decentralized Knowledge Acquisition for Mobile Internet Applications
    This paper explores decentralized knowledge acquisition, a crucial capability for mobile internet applications.

  4. Learning Private Neural Language Modeling with Attentive Aggregation
    This paper studies private neural language modeling with attentive aggregation for improved model fusion; an illustrative sketch of the aggregation idea also follows this list.

  5. A PyTorch Implementation of Federated Learning
    As a testament to the commitment to open-source collaboration, a PyTorch implementation of federated learning is available on Zenodo.
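
Communication-efficient federated learning, as studied in [2], often comes down to sending only an informative subset of each model update. The sketch below uses plain magnitude-based top-k masking as a stand-in; the paper's actual masking criterion and its dynamic sampling of clients are not reproduced here, and `k_ratio` is an illustrative parameter.

```python
import torch

def select_topk_update(update, k_ratio=0.1):
    """Keep only the largest-magnitude entries of a model update.

    Transmitting the top k% of values (plus their indices) instead of
    the full dense update is one simple way to cut communication cost.
    """
    flat = update.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    # Indices of the k largest-magnitude entries.
    _, idx = torch.topk(flat.abs(), k)
    return idx, flat[idx], update.shape

def apply_sparse_update(param, idx, values, shape):
    """Reconstruct the sparse update on the server and apply it."""
    dense = torch.zeros(shape).flatten()
    dense[idx] = values
    param += dense.reshape(shape)
```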
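
The attentive aggregation of [4] can be pictured as replacing FedAvg's fixed data-size weights with layer-wise attention weights derived from how far each client model sits from the current server model. The sketch below is an illustrative reading of that idea rather than the paper's exact formulation; the distance measure, the sign convention in the softmax (here, closer clients weigh more), and the step size `epsilon` are all assumptions.

```python
import torch

def attentive_aggregate(server_state, client_states, epsilon=1.0):
    """Layer-wise attentive aggregation (illustrative sketch only).

    Attention weights are computed per layer from the distance between
    the server's parameters and each client's; the server then steps
    toward the attention-weighted combination of client updates.
    """
    new_state = {}
    for key in server_state:
        # Distance of each client's layer from the server's layer.
        dists = torch.stack([
            torch.norm(server_state[key] - c[key])
            for c in client_states
        ])
        # Softmax over negative distances: closer clients weigh more
        # (this sign convention is an assumption of the sketch).
        attn = torch.softmax(-dists, dim=0)
        # Step the server parameters toward the weighted client update.
        update = sum(
            a * (c[key] - server_state[key])
            for a, c in zip(attn, client_states)
        )
        new_state[key] = server_state[key] + epsilon * update
    return new_state
```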

As the world witnesses the fusion of federated learning and deep neural networks, the project under discussion stands as a beacon in the exploration of federated deep learning’s potential. With a keen focus on addressing privacy concerns in real-world applications and on effective fusion of neural network models, the research not only advances the field but also demonstrates a commitment to transparency through open-source implementations.



Publications
[1] Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning. Shaoxiong Ji, Teemu Saravirta, Shirui Pan, Guodong Long, and Anwar Walid. arXiv preprint arXiv:2102.12920, 2021.
[2] Dynamic Sampling and Selective Masking for Communication-Efficient Federated Learning. Shaoxiong Ji, Wenqi Jiang, Anwar Walid, and Xue Li. IEEE Intelligent Systems, 2021.
[3] Decentralized Knowledge Acquisition for Mobile Internet Applications. Jing Jiang, Shaoxiong Ji, and Guodong Long. World Wide Web, 2020.
[4] Learning Private Neural Language Modeling with Attentive Aggregation. Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, and Zi Huang. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), 2019.
[5] A PyTorch Implementation of Federated Learning. Shaoxiong Ji. Zenodo, 2018.



Feature Photo by dylan nolte on Unsplash