yawning_titan.envs.specific.five_node_def#

NOTE: This environment is deprecated but has been included as an example of how to create a specific environment.

Five Node Environment AKA Cyber Whack A Mole#

This environment is made up of five nodes in the following topology:

+--------+  +--------+  +--------+  +--------+  +--------+
|        |  |        |  |        |  |        |  |        |
| Node 1 |  | Node 2 |  | Node 3 |  | Node 4 |  | Node 5 |
|        |  |        |  |        |  |        |  |        |
+--------+  +--------+  +--------+  +--------+  +--------+

Configurable Parameters:

Number of Machines - Determines the number of machines within the environment; defaults to 5.

Number of Compromised Machines for Loss - Determines how many compromised machines count as a loss.

Attack Success Threshold - Determines what the red agent's attack value must exceed for an attack to be successful.
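The parameters above can be sketched as two predicates: one deciding whether a red attack lands, and one deciding whether Blue has lost. The names and logic below are illustrative assumptions, not the library's actual implementation; only the three parameter values come from this documentation.

```python
import random

# Illustrative parameter values taken from the documented defaults.
N_MACHINES = 5                   # Number of Machines
COMPROMISED_FOR_LOSS = 4         # Number of Compromised Machines for Loss
ATTACK_SUCCESS_THRESHOLD = 0.3   # Attack Success Threshold

def red_attack_succeeds(attacker_skill: int, rng: random.Random) -> bool:
    # Hypothetical check: a skill-scaled random draw must clear the
    # threshold for the attack to compromise a machine.
    attack_value = rng.random() * (attacker_skill / 100)
    return attack_value > ATTACK_SUCCESS_THRESHOLD

def blue_has_lost(compromised: list) -> bool:
    # Blue loses once enough machines are simultaneously compromised.
    return sum(compromised) >= COMPROMISED_FOR_LOSS

print(blue_has_lost([True, True, True, True, False]))  # four compromised -> True
print(blue_has_lost([False] * N_MACHINES))             # none compromised -> False
```

With the default values, Blue loses as soon as four of the five machines are compromised at once.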

Classes

FiveNodeDef

OpenAI Gym Environment for Cyber Whack-a-Mole.

class yawning_titan.envs.specific.five_node_def.FiveNodeDef(attacker_skill=50, n_machines=5, attack_success_threshold=0.3, no_compromised_machine_loss=4)[source]#

OpenAI Gym Environment for Cyber Whack-a-Mole.

__init__(attacker_skill=50, n_machines=5, attack_success_threshold=0.3, no_compromised_machine_loss=4)[source]#
reset()[source]#

Reset the environment to the default state.

Returns:

A new starting observation (numpy array)

step(action)[source]#

Take a time step and execute the actions for both the Blue RL agent and the hard-coded Red agent.

Parameters:

action – The action value generated from the Blue RL agent (int)

Returns:

observation: The next environment observation (numpy array)

reward: The reward value for that timestep (int)

done: Whether the episode is done (bool)

Return type:

tuple of (observation, reward, done)
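The reset/step interface documented above follows the classic OpenAI Gym loop. The sketch below shows that loop against a minimal stand-in class with the same interface; it is an assumption-laden illustration, not yawning_titan's actual FiveNodeDef (whose internal red-agent logic is not reproduced here).

```python
import numpy as np

class WhackAMoleStandIn:
    # Minimal stand-in with the documented interface:
    # reset() -> observation; step(action) -> (observation, reward, done).
    def __init__(self, n_machines: int = 5, no_compromised_machine_loss: int = 4):
        self.n_machines = n_machines
        self.loss_threshold = no_compromised_machine_loss

    def reset(self) -> np.ndarray:
        # All machines start uncompromised.
        self.compromised = np.zeros(self.n_machines, dtype=np.float32)
        return self.compromised.copy()

    def step(self, action: int):
        # Blue "patches" the chosen machine; the episode ends if enough
        # machines are compromised at once (none are in this sketch).
        self.compromised[action] = 0.0
        reward = 1
        done = bool(self.compromised.sum() >= self.loss_threshold)
        return self.compromised.copy(), reward, done

env = WhackAMoleStandIn()
obs = env.reset()
for t in range(3):
    action = t % env.n_machines      # a trivial round-robin policy for illustration
    obs, reward, done = env.step(action)
    if done:
        break
```

Against the real environment, the loop body is the same: call reset() once per episode, then feed each Blue action to step() until done is True.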