@@ -45,12 +45,12 @@ This section explains the structure of the frontend and provides instructions on
start_services.sh
```

- The frontend is organized within the `frontend/` directory. Key files and folders include:
+ The frontend is organized within the `frontend/` directory. Key files and folders include:

- `config/` → Contains the **participant.json.example**, the default structure for the parameters passed to each participant.
- - `databases/` → Contains the different databases for NEBULA
- - `static/` → Holds static assets (CSS, images, JS, etc.).
- - `templates/` → Contains HTML templates. Focus on **deployment.html**
+ - `databases/` → Contains the different databases for NEBULA
+ - `static/` → Holds static assets (CSS, images, JS, etc.).
+ - `templates/` → Contains HTML templates. Focus on **deployment.html**

### **Adding a New Parameter**

@@ -119,7 +119,7 @@ To implement a new attack type, first locate the section where attacks are defin
</h5>
<div class="form-check form-check-inline" style="display: none;" id="new-parameter-container">
<input type="number" class="form-control" id="new-parameter-value"
- placeholder="new parameter value" min="0" value="0">
+ placeholder="new parameter value" min="0" value="0">
</div>
</div>
</div>
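The markup above only adds the hidden input field; its value still has to reach the backend through the participant configuration (participant.json) described in the frontend layout. The sketch below shows that last step under the assumption that the value is stored under a hypothetical `new_parameter` key at the top level of the file; the real key name and nesting are not shown in this diff.

```python
import json

# Sketch only: the "new_parameter" key and the flat layout are assumptions for
# illustration, not the actual structure of participant.json.
with open("participant.json") as f:
    participant_config = json.load(f)

# Fall back to 0, mirroring the default value of the HTML input above.
new_parameter_value = participant_config.get("new_parameter", 0)
print(f"Running with new_parameter = {new_parameter_value}")
```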
@@ -204,43 +204,43 @@ To view the documentation of functions in more detail, you must go to the **NEBU
utils.py
```

- The backend is organized within the `/nebula/` directory. Key files and folders include:
+ The backend is organized within the `/nebula/` directory. Key files and folders include:

**Addons/**

The `addons/` directory contains extended functionalities that can be integrated into the core system.

- - **`attacks/`** → Simulates attacks, primarily for security purposes, including adversarial attacks in machine learning.
- - **`blockchain/`** → Integrates blockchain technology, potentially for decentralized storage or security enhancements.
- - **`trustworthiness/`** → Evaluates the trustworthiness and reliability of participants, focusing on security and ethical considerations.
- - **`waf/`** → Implements a Web Application Firewall (WAF) to filter and monitor HTTP traffic for potential threats.
+ - **`attacks/`** → Simulates attacks, primarily for security purposes, including adversarial attacks in machine learning.
+ - **`blockchain/`** → Integrates blockchain technology, potentially for decentralized storage or security enhancements.
+ - **`trustworthiness/`** → Evaluates the trustworthiness and reliability of participants, focusing on security and ethical considerations.
+ - **`waf/`** → Implements a Web Application Firewall (WAF) to filter and monitor HTTP traffic for potential threats.

- **Core/**
+ **Core/**

The `core/` directory contains the essential components for the backend operation.

- - **`aggregation/`** → Manages the aggregation of data from different nodes.
- - **`datasets/`** → Handles dataset management, including loading and preprocessing data.
- - **`models/`** → Defines machine learning model architectures and related functionalities, such as training and evaluation.
- - **`network/`** → Manages communication between participants in a distributed system.
- - **`pb/`** → Implements Protocol Buffers (PB) for efficient data serialization and communication.
- - **`training/`** → Contains the logic for model training, optimization, and evaluation.
- - **`utils/`** → Provides utility functions for file handling, logging, and common tasks.
+ - **`aggregation/`** → Manages the aggregation of data from different nodes.
+ - **`datasets/`** → Handles dataset management, including loading and preprocessing data.
+ - **`models/`** → Defines machine learning model architectures and related functionalities, such as training and evaluation.
+ - **`network/`** → Manages communication between participants in a distributed system.
+ - **`pb/`** → Implements Protocol Buffers (PB) for efficient data serialization and communication.
+ - **`training/`** → Contains the logic for model training, optimization, and evaluation.
+ - **`utils/`** → Provides utility functions for file handling, logging, and common tasks.

- **Files**
+ **Files**

- - **`engine.py`** → The main engine orchestrating participant communications, training, and overall behavior.
- - **`eventmanager.py`** → Handles event management, logging, and notifications within the system.
- - **`role.py`** → Defines participant roles and their interactions.
+ - **`engine.py`** → The main engine orchestrating participant communications, training, and overall behavior.
+ - **`eventmanager.py`** → Handles event management, logging, and notifications within the system.
+ - **`role.py`** → Defines participant roles and their interactions.

- **Standalone Scripts**
+ **Standalone Scripts**

These scripts act as entry points or controllers for various backend functionalities.

- - **`controller.py`** → Manages the flow of operations, coordinating tasks and interactions.
- - **`participant.py`** → Represents a participant in the decentralized network, handling computations and communication.
- - **`scenarios.py`** → Defines different simulation scenarios for testing and running participants under specific conditions.
- - **`utils.py`** → Contains helper functions that simplify development and maintenance.
+ - **`controller.py`** → Manages the flow of operations, coordinating tasks and interactions.
+ - **`participant.py`** → Represents a participant in the decentralized network, handling computations and communication.
+ - **`scenarios.py`** → Defines different simulation scenarios for testing and running participants under specific conditions.
+ - **`utils.py`** → Contains helper functions that simplify development and maintenance.


### **Adding new Datasets**
@@ -371,7 +371,7 @@ If you want to import a dataset, you must first create a folder named **data** w
# self._load_data(self.path_to_data)

mode = "train" if self.is_train else "test"
- self.image_list = glob.glob(os.path.join(self.path_to_data, f"{self.name}/{mode}/*/*.npy"))
+ self.image_list = glob.glob(os.path.join(self.path_to_data, f"{self.name}/{mode}/*/*.npy"))
self.label_list = glob.glob(os.path.join(self.path_to_data, f"{self.name}/{mode}/*/*.json"))
self.image_list = sorted(self.image_list, key=os.path.basename)
self.label_list = sorted(self.label_list, key=os.path.basename)
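Because both lists are built from the same `{self.name}/{mode}/*/` folders and sorted by basename, the i-th `.npy` image and the i-th `.json` label are expected to describe the same sample. The helper below is only a sketch of that pairing assumption; the dataset's actual `__getitem__` is not shown in this hunk.

```python
import json

import numpy as np

def load_item(image_list, label_list, index, transform=None):
    """Load one (image, label) pair by index (illustrative sketch, not NEBULA's actual code)."""
    image = np.load(image_list[index])      # SAR sample stored as a .npy array
    with open(label_list[index]) as f:
        label = json.load(f)                # matching label/metadata stored as JSON
    if transform is not None:
        image = transform(image)
    return image, label
```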
@@ -424,7 +424,7 @@ Then you must create a **MilitarySARDataset** class in order to use it, as shown

#### Define transforms

- You can apply transformations like cropping and normalization using `torchvision.transforms`.
+ You can apply transformations like cropping and normalization using `torchvision.transforms`.

For example, the **MilitarySAR** dataset uses **RandomCrop** for training and **CenterCrop** for testing.

@@ -483,7 +483,7 @@ For example, the **MilitarySAR** dataset uses **RandomCrop** for training and **
    apply_transforms = [CenterCrop(88), transforms.ToTensor()]
    if train:
        apply_transforms = [RandomCrop(88), transforms.ToTensor()]
-
+
    return MilitarySAR(name="soc", is_train=train, transform=transforms.Compose(apply_transforms))
```

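`CenterCrop` and `RandomCrop` appear here without the `transforms.` prefix, unlike `transforms.ToTensor()`, which suggests they are crop classes defined alongside the dataset to operate on the raw `.npy` arrays before conversion to tensors (torchvision's own crops expect PIL images or tensors). A minimal sketch of such a transform, assuming HxW or HxWxC numpy input, could look like this:

```python
import numpy as np

class CenterCrop:
    """Crop the central size x size patch of a numpy array (illustrative sketch only)."""

    def __init__(self, size):
        self.size = size

    def __call__(self, image):
        h, w = image.shape[:2]
        top = (h - self.size) // 2
        left = (w - self.size) // 2
        return image[top:top + self.size, left:left + self.size]

# A RandomCrop variant would follow the same pattern, but draw `top` and `left`
# with np.random.randint(0, h - size + 1) instead of centering the window.
```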
@@ -816,4 +816,4 @@ The new aggregator must inherit from the **Aggregator** class. You can use **Fed

# self.print_model_size(accum)
return accum
- ```
+ ```
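The hunk above only shows the tail of the aggregation routine, where the accumulated model `accum` is returned. For orientation, a FedAvg-style accumulation boils down to a sample-weighted average of the received state dicts. The sketch below illustrates that idea only; the `(state_dict, num_samples)` mapping and the function signature are assumptions, not NEBULA's actual `Aggregator` interface.

```python
def fedavg_accumulate(models):
    """Sample-weighted average of model state_dicts (FedAvg-style sketch).

    `models` is assumed to map a participant id to a (state_dict, num_samples)
    tuple; the real Aggregator interface in nebula/core/aggregation may differ.
    """
    total_samples = sum(num_samples for _, num_samples in models.values())
    accum = None
    for state_dict, num_samples in models.values():
        weight = num_samples / total_samples
        if accum is None:
            accum = {k: v * weight for k, v in state_dict.items()}
        else:
            for k, v in state_dict.items():
                accum[k] = accum[k] + v * weight
    return accum
```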