Enhance slips.py documentation #498

Open — wants to merge 1 commit into base: develop
75 changes: 41 additions & 34 deletions docs/code_documentation.md
@@ -1,41 +1,48 @@
# Code documentation

## How Slips Works

<img src="https://raw.githubusercontent.com/stratosphereips/StratosphereLinuxIPS/develop/docs/images/slips_workflow.png" title="Slips Workflow">

1. slips.py is the entry point. It is responsible for starting all modules and keeping Slips running until the analysis is finished.
2. slips.py starts the input process, which is responsible for reading the flows from the files given to Slips using `-f`.
It detects the type of file, reads it, and passes the flows to the profiler process. If Slips was given a PCAP or is running on an interface,
the input process starts a Zeek thread that analyzes the PCAP/interface using Slips' own Zeek configuration and sends the generated Zeek
flows to the profiler process.
3. slips.py also starts the update manager, which updates Slips' local TI files, such as the ones stored in `slips_files/organizations_info` and `slips_files/ports_info`.
Later, when Slips is starting all the modules, it starts the update manager again, this time to update remote TI files in the background.
4. Once the profiler process receives the flows read by the input process, it converts them to a structure that Slips can deal with.
It creates profiles and time windows for each IP it encounters.
5. The profiler process gives each flow to the appropriate module. For example, flows from `http.log` are sent to `http_analyzer.py`
for analysis.
6. The profiler process stores the flows, profiles, etc. in Slips' databases for later processing; the stored information is used by all modules.
Slips has two databases, Redis and SQLite. It uses the SQLite database to store all the flows read and labeled, and uses Redis for all other operations. The SQLite database is
created in the output directory, while the Redis database is in-memory.
7-8. Using the flows stored in the database in step 6, and with the help of the timeline module, Slips puts the given flows in a human-readable form which is
then used by the web UI and Kalipso UI.
9. When a module finds a detection, it sends the detection to the evidence process to deal with it (step 10). First, the evidence is checked against the whitelist in
`config/whitelist.conf`. If the evidence is whitelisted, it is discarded and does not go through the next steps.
10. Now that Slips is sure the evidence is not whitelisted, the evidence process logs it to Slips' log files and gives the evidence to all modules responsible for exporting
evidence. So, if CESNET, the exporting modules, or CYST is enabled, the evidence process notifies them
through Redis channels that evidence was found and it is time to share it.
11. If the blocking module is enabled using `-p`, the evidence process shares all detected alerts with the blocking module, which handles
blocking the attacker's IP through the Linux firewall (supported on Linux only).
12. If P2P is enabled in `config/slips.conf`, the P2P module shares the attacker's IP, its score, and the blocking requests sent by the evidence process
with other peers in the network, so they can block the attackers before being reached by them.
13. The output process is Slips' custom logging framework. All alerts, warnings, and info printed are sent here first for proper formatting and printing.

This is a brief explanation of how Slips works, intended for new contributors.

All the modules described above are covered in more detail in the rest of the documentation.
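The processes in the steps above communicate through Redis pub/sub channels. As a rough illustration of that pattern only (an in-memory stand-in, not Slips' actual API or channel names):

```python
# Minimal in-memory stand-in for the Redis pub/sub channels Slips uses
# between processes. The channel name below is illustrative.
from collections import defaultdict


class ChannelBus:
    def __init__(self):
        self.subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Deliver the message to every subscriber of this channel.
        for callback in self.subscribers[channel]:
            callback(message)


bus = ChannelBus()
received = []
bus.subscribe("new_flow", received.append)       # e.g. a module listening
bus.publish("new_flow", {"saddr": "10.0.0.1"})   # e.g. the profiler publishing
```

In the real system, Redis replaces this in-process bus, which is what lets separate OS processes (profiler, modules, evidence process) exchange messages.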


### Code Docs

slips.py serves as the entry point of the Slips framework; it orchestrates the various components and keeps the analysis running until it finishes.

### Input Process Management

- slips.py initiates the input process, which is responsible for ingesting flow data from files specified with the `-f` option. It detects the file type, reads it, and forwards the flows to the profiler process. When handling PCAP files or capturing directly from a network interface, slips.py spawns a Zeek thread that analyzes the traffic using Slips' customized Zeek configuration and relays the generated flows to the profiler process.
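That dispatch can be pictured as a small routing function. This is a hypothetical sketch under assumed names and extensions; Slips' real input process inspects file contents rather than just the filename:

```python
# Hypothetical sketch of dispatching an input file by type. The real input
# process in Slips is more thorough (it inspects the file, not the suffix).
from pathlib import Path


def detect_input_type(path: str) -> str:
    suffix = Path(path).suffix.lower()
    if suffix in (".pcap", ".pcapng"):
        return "pcap"       # handed to a Zeek thread for analysis
    if suffix == ".binetflow":
        return "argus"      # Argus binetflow text files
    if suffix == ".log":
        return "zeek"       # already-generated Zeek logs
    return "unknown"


assert detect_input_type("capture.pcap") == "pcap"
```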

### Update Manager Operation

- slips.py also starts the update manager, which keeps Slips' local threat intelligence (TI) files up to date, including the files stored in `slips_files/organizations_info` and `slips_files/ports_info`. Later, while starting the modules, slips.py triggers the update manager again to refresh remote TI files in the background.

### Profiler Process Workflow

- Upon receiving flows from the input process, the profiler process transforms them into a format compatible with Slips' analysis. It creates profiles and time windows for each encountered IP address.
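Conceptually, a profile is keyed by IP and split into fixed-width time windows. A toy model of that bookkeeping (the window width and field names are assumptions, not Slips' schema):

```python
from collections import defaultdict

TW_WIDTH = 3600  # assumption: 1-hour time windows


def tw_number(flow_ts: float, start_ts: float) -> int:
    # Which 0-based time window of the profile this flow falls into.
    return int((flow_ts - start_ts) // TW_WIDTH)


# ip -> time window number -> list of flows seen in that window
profiles = defaultdict(lambda: defaultdict(list))


def add_flow(ip: str, ts: float, flow: dict, start_ts: float = 0.0):
    profiles[ip][tw_number(ts, start_ts)].append(flow)


add_flow("10.0.0.1", 100.0, {"dport": 80})
add_flow("10.0.0.1", 3700.0, {"dport": 443})  # lands in the next window
```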

### Module Integration and Database Utilization

- The profiler process then routes flows to the relevant modules based on their type. For instance, flows from `http.log` are directed to `http_analyzer.py` for analysis. Flows, profiles, and other relevant information are stored in Slips' databases: SQLite for persistent storage of the read and labeled flows, and Redis for all other operations.
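The SQLite side of that split can be pictured as a flows table keyed by flow uid. The schema below is invented for illustration and uses an in-memory database; Slips creates its SQLite file in the output directory:

```python
import json
import sqlite3

# In-memory database for illustration only; the column layout is an assumption.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flows (uid TEXT PRIMARY KEY, flow TEXT, label TEXT)")


def store_flow(uid: str, flow: dict, label: str = "benign"):
    # Flows are stored serialized, alongside the label assigned to them.
    conn.execute(
        "INSERT INTO flows VALUES (?, ?, ?)",
        (uid, json.dumps(flow), label),
    )


store_flow("C1a2b3", {"saddr": "10.0.0.1", "dport": 80}, label="malicious")
row = conn.execute("SELECT label FROM flows WHERE uid = 'C1a2b3'").fetchone()
```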

### Human-Readable Output and User Interface Integration

- Using the stored data and the timeline module, Slips generates human-readable representations of the flows, which are consumed by the web UI and Kalipso UI.

### Evidence Handling and Sharing

- Detected evidence is first checked against the whitelist in `config/whitelist.conf`; whitelisted evidence is discarded. Non-whitelisted evidence proceeds to the evidence process, which logs it and distributes it to the modules responsible for exporting evidence, such as CESNET, the exporting modules, or CYST, when enabled.
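The whitelist gate in front of the evidence process amounts to a lookup before anything is logged or exported. A simplified sketch (the real `whitelist.conf` supports more than plain IPs, e.g. domains, organizations, and MAC addresses):

```python
# Simplified whitelist check; entries would be parsed from config/whitelist.conf.
# The IPs below are made up for illustration.
WHITELISTED_IPS = {"10.0.0.5", "192.168.1.1"}


def should_process(evidence: dict) -> bool:
    """Discard evidence whose attacker IP is whitelisted."""
    return evidence["attacker"] not in WHITELISTED_IPS


assert should_process({"attacker": "1.2.3.4"})
assert not should_process({"attacker": "10.0.0.5"})
```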

### Blocking and P2P Functionality

- With the blocking module enabled, Slips shares detected alerts with the blocking module, which blocks attacker IPs through the Linux firewall (supported on Linux only). If P2P functionality is enabled, the P2P module shares attacker information, scores, and blocking requests with other peers in the network, so peers can block attackers before being contacted by them.
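On Linux, blocking an attacker IP typically comes down to inserting a firewall rule. The sketch below only builds an iptables command and never executes it; Slips' blocking module manages its own rules, so treat the chain and flags here as illustrative:

```python
def build_block_command(ip: str, chain: str = "INPUT") -> list[str]:
    # Drop all packets from the attacker IP. In practice such a command
    # would be passed to subprocess.run() with root privileges; here we
    # only construct it for inspection.
    return ["iptables", "-I", chain, "-s", ip, "-j", "DROP"]


cmd = build_block_command("1.2.3.4")
```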

### Output Process Handling

- Finally, Slips' output process acts as the logging framework: all alerts, warnings, and informational messages are formatted and printed by it.

This summary gives new contributors an overview of Slips' core workflow. For details on individual modules and processes, refer to the rest of the documentation.



## Code Docs

[Slips auto-generated code documentation here](https://stratosphereips.github.io/StratosphereLinuxIPS/files.html)