API documentation

Instance Module

Instance Components

makalii.instance.instance_factory.get_instance_base(user_settings, rootdir='./')

Maker-map returning the base-class interface for the appropriate API-derived class

Args:

user_settings: json file of user settings

rootdir: top-level directory (to find package data)

Returns:

base class InstanceBase object
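A minimal sketch of the maker-map dispatch; the ‘platform’ settings key and the stub classes below are illustrative assumptions, not the package’s actual settings schema:

```python
# Sketch of the maker-map factory pattern. The 'platform' key and the
# stub classes are hypothetical stand-ins for the real settings/classes.
class InstanceBase:
    def __init__(self, uset, rootdir='./'):
        self.uset = uset
        self.rootdir = rootdir

class InstanceAWS(InstanceBase):
    pass

class InstanceAzure(InstanceBase):
    pass

# Maker-map: platform name -> derived class
_MAKER_MAP = {'aws': InstanceAWS, 'azure': InstanceAzure}

def get_instance_base(user_settings, rootdir='./'):
    """Return a base-class handle on the appropriate derived instance."""
    return _MAKER_MAP[user_settings['platform']](user_settings, rootdir)
```

The caller only ever holds an InstanceBase reference, so the derived AWS/Azure/Txdocker classes stay interchangeable.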

class makalii.instance.instance_base.InstanceBase(uset, rootdir='./')

Bases: object

Base class defining the interface to instance operation client/resource clients

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

edit_instance_template()

Parse the instance template information using user info

get_inst_user_name()

Return the name of the instance at the remote location

get_ip_addresses()

For appropriately identified instance group, get a list of the public and private IP addresses

get_private_dns_names()

Return the private DNS names, appropriate for Slurm setup and/or an MPI hosts file
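For illustration, the returned names can be written straight into an MPI-style hosts file; the function name and slots column below are assumptions:

```python
def write_mpi_hosts(private_dns_names, slots=2):
    """Format private DNS names as an MPI hosts file body.

    The slots column is illustrative; real values would match the
    vCPU count of the chosen instance type.
    """
    return ''.join(f'{name} slots={slots}\n' for name in private_dns_names)
```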

get_resource_object()

Return the main resource object for the underlying remote API (for testing)

get_storage_object(typeFlag='')

Return the main storage object for the underlying remote API (for testing)

scp_to_instances(public_ips, setup_files)

Remote copy setup files to instances

Args:

public_ips: Public IPs of instances

setup_files: List of file names to remote copy

Returns: None
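A sketch of how the per-instance copy commands might be assembled; the SSH user and key path are placeholders, not the package’s actual settings:

```python
def build_scp_commands(public_ips, setup_files,
                       user='ubuntu', key_path='~/.ssh/id_rsa'):
    """Build one scp argument list per instance, copying all setup
    files to the remote home directory."""
    return [
        ['scp', '-i', key_path, *setup_files, f'{user}@{ip}:~/']
        for ip in public_ips
    ]
```

Each list is suitable for `subprocess.run`, one invocation per instance.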

select_instance_dsize(dsize)

Parse specific cloud platform instance template and edit the requested disk size (GB)

Args:

dsize: disk size (GB)

Returns: None

select_instance_image(imageStr)

Parse specific cloud platform instance template and edit the image bare-metal type

Args:

imageStr: String for instance type

Returns: None

select_instance_type(typeStr)

Parse specific cloud platform instance template and edit the instance type

Args:

typeStr: String for AWS instance type

Returns: None

setup_shared_dir(public_ips, private_ips)

Perform remote setup on the ‘master’ instance to create an NFS shared directory, and have each compute node mount the directory from the NFS server (master) node. Note: this is overridden in derived classes where a shared directory already exists

Args:

public_ips - Public IPs for access to instances

private_ips - Private IPs for setting in mount command

ssh_to_master(public_ip_list, instance_index=0)

Dump a script to connect ‘directly’ to the master instance (by default). NOTE: this is for devel/debug on Unix (not available on Windows)

Args:

public_ip_list: list of public IP addresses

instance_index: index in IP list of instance to connect

Returns: None
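The dumped script can be sketched as below; the user name, key path, and script name are illustrative assumptions:

```python
import os

def dump_connect_script(public_ip_list, instance_index=0,
                        user='ubuntu', key_path='~/.ssh/id_rsa',
                        script_path='connect.sh'):
    """Write a small shell script that sshes to the chosen instance
    and return the script body."""
    ip = public_ip_list[instance_index]
    body = f'#!/bin/bash\nssh -i {key_path} {user}@{ip}\n'
    with open(script_path, 'w') as f:
        f.write(body)
    os.chmod(script_path, 0o755)  # make the dumped script executable
    return body
```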

start_instances()

Create and start cloud instances

stop_instances()

Stop ‘running’ instances

Args: None

Returns: None

terminate_instances()

Terminate ‘tagged’ instances

Args: None

Returns: None

class makalii.instance.instance_aws.InstanceAWS(uset, rootdir)

Bases: makalii.instance.instance_base.InstanceBase

Derived class implementing interface in the base class using Amazon Web Services boto3 module

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

edit_instance_template()

Parse the instance template information using user info in AWS instance

get_ip_addresses()

For appropriately identified instance group, get a list of the public and private IP addresses

Args:

filteredInst: appropriately filtered instances (not checking if valid)

Returns:

public,private IP address lists

get_resource_object()

Return the main resource object for the underlying remote API (for testing)

Returns: return boto3 class object created with b3.client(‘ec2’)

get_storage_object(typeFlag='')

Return the main S3 storage object for the underlying remote API (for testing)

Args:

typeFlag (str): settings [‘’, ‘r’] flag for which s3 object to return

Returns: return boto3 class object created with b3.client(‘s3’) or b3.resource(‘s3’)

select_instance_dsize(dsize)

Parse specific cloud platform instance template and edit the requested disk size (GB)

Args:

dsize: disk size (GB)

Returns: None

select_instance_image(imageStr)

Parse specific cloud platform instance template for AWS and edit the image type (e.g. ami-07a6716a7f1ee6d61, ami-e251209a, etc.)

Args:

imageStr: String for AWS image type

Returns: None

select_instance_type(typeStr)

Parse specific cloud platform instance template for AWS and edit the instance type

Args:

typeStr: String for AWS instance type

Returns: None

setup_shared_dir(public_ips, private_ips)

Perform remote setup on the ‘master’ instance to create an NFS shared directory, and have each compute node mount the directory from the NFS server (master) node. Note: this is overridden in derived classes where a shared directory already exists

Args:

public_ips - Public IPs for access to instances

private_ips - Private IPs for setting in mount command

start_instances()

Uses boto3 API to create and start AWS instance

stop_instances()

Uses boto3 API to stop filtered running AWS instances.

terminate_instances()

Uses boto3 API to terminate filtered AWS instances

class makalii.instance.instance_azure.InstanceAzure(uset, rootdir)

Bases: makalii.instance.instance_base.InstanceBase

Derived class implementing interface in the base class using Azure Python API module

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

edit_instance_template()

Parse the instance template information using user info in Azure instance

get_ip_addresses()

For appropriately identified instance group, get a list of the public and private IP addresses

Returns:

list of public,private IPs

get_resource_object()

Return the main resource object for the underlying remote API (for testing)

get_storage_object(typeFlag='')

Return the main Azure storage object for the underlying remote API (for testing)

select_instance_dsize(dsize)

Parse specific cloud platform instance template and edit the requested disk size (GB)

Args:

dsize: disk size (GB)

Returns: None

select_instance_image(imageStr)

Parse specific cloud platform instance template for Azure and edit the image type

Args:

imageStr: String for Azure image type

Returns: None

select_instance_type(typeStr)

Parse specific cloud platform instance template for Azure and edit the instance type

Args:

typeStr: String for Azure instance type

Returns: None

setup_shared_dir(public_ips, private_ips)

Perform remote setup on the ‘master’ instance to create an NFS shared directory, and have each compute node mount the directory from the NFS server (master) node. Note: this is overridden in derived classes where a shared directory already exists

Args:

public_ips - Public IPs for access to instances

private_ips - Private IPs for setting in mount command

start_instances()

Uses Azure API to create and start instances

stop_instances()

Uses Azure API to stop running Azure instances.

terminate_instances()

Uses Azure API to terminate tagged Azure instances

class makalii.instance.instance_txdocker.InstanceTxdocker(uset, rootdir)

Bases: makalii.instance.instance_base.InstanceBase

Derived class implementing the interface in the base class for a specific local cluster at Tech-X (the docker1,2,… machines)

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

edit_instance_template()

Not implemented

get_ip_addresses()

For appropriately identified group, get a list of the public and private IP addresses. IPs are hardwired for the local Tech-X ‘docker’ cluster

Returns:

public,private IP address lists

get_resource_object()

Return the main resource object for the underlying remote API (for testing)

get_storage_object()

Return the main storage object for the underlying remote API (for testing)

select_instance_dsize(dsize)

Parse specific cloud platform instance template and edit the requested disk size (GB)

Args:

dsize: disk size (GB)

Returns: None

select_instance_image(imageStr)

Parse specific cloud platform instance template and edit the image type

Args:

imageStr: String for instance image type

Returns: None

select_instance_type(typeStr)

Parse specific cloud platform instance template and edit the instance type

Args:

typeStr: String for instance type

Returns: None

setup_shared_dir(public_ips, private_ips)

Overriding since shared directory already exists

Args:

public_ips - Public IPs for access to instances

private_ips - Private IPs for setting in mount command

start_instances()

Not implemented

stop_instances()

Not implemented

terminate_instances()

Not implemented

Container Module

Container Components

makalii.container.container_factory.get_container_base(user_settings, rootdir='./', passwd='devel')

Maker-map returning the base-class interface for the appropriate remote-setup derived class

Args:

user_settings: json file of user settings

rootdir: top-level directory (to find package data)

passwd: security password for container

Returns:

base class ContainerBase object

class makalii.container.container_base.ContainerBase(uset, rootdir, passwd)

Bases: object

Base class defining the interface to cloud operation client/resource clients

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

passwd: password to be used in accessing remote containers

container_reset_pswd(public_ip_list)

Use a random password and reset it within each running container on the listed instances. Currently runs a bash script remotely (because of initial issues running paramiko for this)

Args:

public_ip_list: list of public IP addresses

docker_compose_setup(public_ips)

Remote setup of docker compose

Args:

public_ips: Public IPs of instances

dump_compose_yaml(private_ips, yaml_name, interface_name, host_shared_dir, use_slurm, is_slurm_slave)

Dump docker-compose.yaml file with updated IP info and image type

Args:

private_ips: Private IPs

yaml_name: Name of yaml file (full path) to be used by docker run start

interface_name: Name of interface (defaults to eth0)

host_shared_dir: Location of NFS shared directory on host instance

use_slurm: Flag for including slurm environment variables

is_slurm_slave: Sets the ENV_ROLE env variable
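A minimal sketch of the yaml templating, one service per private IP; the image name, service names, and environment keys other than ENV_ROLE are placeholders:

```python
def render_compose_yaml(private_ips, interface_name='eth0',
                        host_shared_dir='/mnt/shared', is_slurm_slave=True):
    """Render a minimal docker-compose body, one service per private IP.

    The image name and most environment keys are placeholders; only
    ENV_ROLE is named in the documented interface.
    """
    lines = ['version: "3"', 'services:']
    for i, ip in enumerate(private_ips):
        # First node acts as master; the rest are slaves when requested
        role = 'slave' if (is_slurm_slave and i > 0) else 'master'
        lines += [
            f'  node{i}:',
            '    image: example/sim:latest',
            '    environment:',
            f'      - ENV_ROLE={role}',
            f'      - NODE_IP={ip}',
            f'      - IFACE={interface_name}',
            '    volumes:',
            f'      - {host_shared_dir}:/shared',
        ]
    return '\n'.join(lines) + '\n'
```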

dump_yaml_file(private_ips, yaml_name, interface_name, host_shared_dir)

Dump compose.yaml file with updated IP info

Args:

private_ips: Private IPs

yaml_name: Name of yaml file (full path) to be used by docker run start

interface_name: Name of interface (defaults to eth0)

host_shared_dir: Location of NFS shared directory on host instance

editSlurmTemplate(hostNameList)

Take host info and edit the slurm config file

Args:

hostNameList: appropriately formatted private IP hostname list

Example of node name line to be filled in:

NodeName= CPUs=2 Boards=1 SocketsPerBoard=1 CoresPerSocket=1 ThreadsPerCore=2 RealMemory=3921 State=UNKNOWN
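The edit can be sketched as formatting one NodeName line per host, echoing the example line above (real CPU/memory figures would come from the instance type):

```python
def format_node_lines(host_name_list):
    """Fill in one slurm.conf NodeName line per host; hardware figures
    echo the documented example and would really come from the instance
    type."""
    template = ('NodeName={h} CPUs=2 Boards=1 SocketsPerBoard=1 '
                'CoresPerSocket=1 ThreadsPerCore=2 RealMemory=3921 '
                'State=UNKNOWN')
    return [template.format(h=h) for h in host_name_list]
```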

exec_cmd_as_user_in_containers(public_ips, remote_cmd_list)

Execute remote commands inside all containers. Instances/containers must have been set up previously

Args:

public_ips: Public IPs of instance on which container is running

remote_cmd_list: List of string commands to execute as user from home in container

pull_simulation_images(public_ips, master_only=False)

Remote pull of simulation images to instances on each machine

Args:

public_ips: Public IPs of instances

master_only: Flag to pull images only on the ‘master’ instance

remote_docker_setup(public_ips, inst_settings)

Remote setup/start docker

Args:

public_ips: Public IPs of instances

inst_settings: JSON dictionary of instance creation settings

remote_stop_containers(public_ips)

Remote force removal of all containers. ‘Ghost’ containers can corrupt running containers of the same sort, so they need to be cleaned up before starting containers

Args:

public_ips: Public IPs of instances

restart_slurm(public_ips)

Remote restart of slurmd and slurmctld (on the master node) across containers

Args:

public_ips: Public IPs of instances

scp_from_master_cntr(public_ips, fname, local_dir_path='./')

Remote copy a single file from the master container to the local session. Containers must have been correctly started on the remote instance and be listening on the port specified in settings

Args:

public_ips: Public IPs of instance on which container is running

fname: Name of file (by default in the HOME directory in the container)

local_dir_path: Local directory into which to scp target fname

scp_to_containers(public_ips, targets_list, remote_dir_path='./')

Remote copy setup files to running containers. Containers must have been correctly started on the remote instance and be listening on the container port specified in this class data

Args:

public_ips: Public IPs of instance on which container is running

targets_list: List of ‘targets’ (files/directories) to remote copy

remote_dir_path: Destination path in remote container (relative to home)

ssh_to_master(public_ip_list)

Connect to a bash session in the container running on the master instance and dump a connect.sh script. This edits the bashrc

Args:

public_ip_list: list of public IP addresses

start_containers(public_ips, yaml_file_name)

Remote start of computational/communication containers on all instances. The Consul image is run only on the ‘master’ instance

Args:

public_ips: Public IPs of instances

yaml_file_name: yaml file setup name for appr program

stop_slurm(public_ips)

Remote stop of slurmd and slurmctld (on the master node) across containers

Args:

public_ips: Public IPs of instances

class makalii.container.container_aws.ContainerAWS(uset, rootdir='./', passwd='devel')

Bases: makalii.container.container_base.ContainerBase

Remote setup methods specific to AWS

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

passwd: password to be used in accessing remote containers

dump_compose_yaml(private_ips, yaml_name, use_slurm=False, is_slurm_slave=True)

Dump docker-compose yaml file with updated IP info

Args:

private_ips: Private IPs

yaml_name: Complete path name of yaml file to be used by docker run start

use_slurm: Flag for including slurm environment variables

is_slurm_slave: Sets the ENV_ROLE env variable

dump_yaml_file(private_ips, yaml_name)

Dump compose.yaml file with updated IP info and specific AWS settings

Args:

private_ips: Private IPs

yaml_name: Complete path name of yaml file to be used by docker run start

remote_docker_setup(public_ips, inst_settings)

Remote setup of UberCloud appr and setup/start docker specifically for AWS (this will depend on instance type as well)

Args:

public_ips: Public IPs of instances

inst_settings: JSON dictionary of instance creation settings

class makalii.container.container_azure.ContainerAzure(uset, rootdir='./', passwd='devel')

Bases: makalii.container.container_base.ContainerBase

Remote setup methods specific to Azure

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

passwd: password to be used in accessing remote containers

dump_compose_yaml(private_ips, yaml_name, use_slurm=False, is_slurm_slave=True)

Dump docker-compose yaml file with updated IP info

Args:

private_ips: Private IPs

yaml_name: Complete path name of yaml file to be used by docker run start

use_slurm: Flag for including slurm environment variables

is_slurm_slave: Sets the ENV_ROLE env variable

dump_yaml_file(private_ips, yaml_name)

Dump compose.yaml file with updated IP info and specific Azure settings

Args:

private_ips: Private IPs

yaml_name: Complete path name of yaml file to be used by docker run start

remote_docker_setup(public_ips, inst_settings)

Remote setup of UberCloud appr and setup/start docker specifically for Azure (this will depend on instance type as well)

Args:

public_ips: Public IPs of instances

inst_settings: JSON dictionary of instance creation settings

class makalii.container.container_txdocker.ContainerTxdocker(uset, rootdir='./', passwd='devel')

Bases: makalii.container.container_base.ContainerBase

Remote setup methods specific to docker1,2,… machines at Tech-X

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

passwd: password to be used in accessing remote containers

dump_compose_yaml(private_ips, yaml_name, use_slurm=False, is_slurm_slave=True)

Dump docker-compose yaml file with updated IP info

Args:

private_ips: Private IPs

yaml_name: Complete path name of yaml file to be used by docker run start

use_slurm: Flag for including slurm environment variables

is_slurm_slave: Sets the ENV_ROLE env variable

dump_yaml_file(private_ips, yaml_name)

Dump compose.yaml file with updated IP info and settings specific to the Tech-X docker machines

Args:

private_ips: Private IPs

yaml_name: Complete path name of yaml file to be used by docker run start

remote_docker_setup(public_ips, inst_settings)

Docker is already set up on the docker1,2,… machines. May later check IPs and verify the Docker installation

Args:

public_ips: Public IPs of instances

inst_settings: JSON dictionary of instance creation settings

Reservoir Module

Reservoir Components

makalii.reservoir.reservoir_factory.get_reservoir_base(user_settings, storage_type, rootdir='./')

Maker-map returning the base-class interface for the appropriate API-derived class

Args:

user_settings: json file of user settings

storage_type: string selecting the storage type

rootdir (str): top-level directory (to find package data)

Returns:

base class ReservoirBase object

class makalii.reservoir.reservoir_base.ReservoirBase(uset, rootdir='./')

Bases: object

Base class defining the interface to Reservoir operation client/resource clients

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

delete_bucket_objects(bucketName, key)

Delete bucket object(s). If key name is a folder(directory) then all objects in that folder are deleted. If a single file key is specified, then only that file is deleted

Args:

bucketName (str): bucket name string

key (str): folder (file) prefix string

download_bucket_objects(bucketName, keys, localDir='', overwriteFlag=False)

Take a list of key names (which can include directory names) in a specified top-level bucket and download the objects. If directories do not exist they are created. Existing directories will be overwritten with new files from the bucket

Args:

bucketName (str): name of top-level storage object

keys (list): strings of files (with full path from where the module is running) to download

localDir (str): local directory location for download [default ‘’]

overwriteFlag (bool): flag of whether download will overwrite any local files
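The overwriteFlag behavior can be sketched as a local existence check (the actual download call is elided; this is only the decision logic):

```python
import os

def should_download(key, local_dir='', overwrite_flag=False):
    """True when a bucket object should be fetched: either the local
    copy is missing, or overwrite_flag forces a re-download."""
    dest = os.path.join(local_dir, key)
    return overwrite_flag or not os.path.exists(dest)
```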

get_bucket_names()

Get top-level storage buckets from the remote storage service

Return:

list of bucket name strings

get_bucket_object_names(bucketName, searchDir='', searchFile='')

Recursively list all objects within top-level reservoir

Args:

bucketName (str): string name of the storage bucket

searchDir (str): default (empty) search string for directory names

searchFile (str): default (empty) search string for file names

Return:

list of keys matching search (or all objects in bucket)
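The search behavior can be sketched as substring filters on the directory and file parts of each key (the package’s exact matching semantics may differ):

```python
def filter_keys(keys, search_dir='', search_file=''):
    """Keep keys whose directory part contains search_dir and whose
    file part contains search_file; empty strings match everything."""
    selected = []
    for key in keys:
        dir_part, _, file_part = key.rpartition('/')
        if search_dir in dir_part and search_file in file_part:
            selected.append(key)
    return selected
```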

get_object_type_name()

Name of object for printing info

get_storage_object(typeFlag='')

Return the main storage object for the underlying remote API (for testing)

upload_bucket_objects(bucketName, path, remoteDir='', excludeList=[])

Upload contents of path name directory to a storage object

Args:

bucketName (str): string name of the storage bucket

path (str): directory name to upload

remoteDir (str): remote folder in which to place uploads [default = ‘’]

excludeList (list): list of search strings to exclude from upload

class makalii.reservoir.reservoir_aws.ReservoirAWS(uset, rootdir)

Bases: makalii.reservoir.reservoir_base.ReservoirBase

Derived class implementing interface in the base class using Amazon Web Services boto3 module

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

delete_bucket_objects(bucketName, key)

Delete bucket object(s). If key name is a folder(directory) then all objects in that folder are deleted. If a single file key is specified, then only that file is deleted

Args:

bucketName (str): bucket name string

key (str): folder (file) prefix string

download_bucket_objects(bucketName, keys, localDir='', overwriteFlag=False)

Take a list of key names (which can include directory names) in a specified top-level bucket and download the objects. If directories do not exist they are created. Existing directories will be overwritten with new files from the bucket

Args:

bucketName (str): name of top-level S3 bucket

keys (list): strings of files (with full path from where the module is running) to download

localDir (str): local directory location for download [default ‘’]

overwriteFlag (bool): flag of whether download will overwrite any local files

get_bucket_names()

Get all top-level storage buckets from AWS S3 regardless of password access (this is not exposed through the driver)

Return:

list of bucket name strings

get_bucket_object_names(bucketName, searchDir='', searchFile='')

Recursively list all objects within an S3 bucket

Args:

bucketName (str): string name of AWS S3 bucket

searchDir (str): default (empty) search string for directory names

searchFile (str): default (empty) search string for file names

Return:

list of keys matching search (or all objects in bucket)

list of key sizes (in bytes)

get_storage_object(typeFlag='')

Return the main S3 storage object for the underlying remote API (for testing)

Args:

typeFlag (str): settings [‘’, ‘r’] flag for which s3 object to return

Returns: return boto3 class object created with b3.client(‘s3’) or b3.resource(‘s3’)

upload_bucket_objects(bucketName, path, remoteDir='', excludeList=[])

Upload contents of path name directory to an S3 bucket. Overwrites existing contents

Args:

bucketName (str): string name of AWS S3 bucket

path (str): directory name to upload

remoteDir (str): remote folder in which to place uploads [default = ‘’]

excludeList (list): list of search strings to exclude from upload
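The excludeList handling can be sketched as a directory walk that skips any relative path containing an excluded substring (the S3 upload call itself is elided; this only collects the candidate paths):

```python
import os

def collect_upload_paths(path, exclude_list=()):
    """Walk 'path' and return sorted relative file paths, skipping any
    whose relative path contains one of the exclude substrings."""
    selected = []
    for root, _dirs, files in os.walk(path):
        for fname in files:
            rel = os.path.relpath(os.path.join(root, fname), path)
            if any(pattern in rel for pattern in exclude_list):
                continue  # matched an exclude search string
            selected.append(rel)
    return sorted(selected)
```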

class makalii.reservoir.reservoir_azure.ReservoirAzure(uset, rootdir)

Bases: makalii.reservoir.reservoir_base.ReservoirBase

Derived class implementing interface in the base class using Azure Python API module

Constructor args:

uset: user settings (from .json file)

rootdir: top level directory to find files/scripts

delete_bucket_objects(bucketName, key)

Delete bucket object(s). If key name is a folder(directory) then all objects in that folder are deleted. If a single file key is specified, then only that file is deleted

Args:

bucketName (str): bucket name string

key (str): folder (file) prefix string

download_bucket_objects(bucketName, keys, localDir='', overwriteFlag=False)

Take a list of key names (which can include directory names) in a specified top-level bucket and download the objects. If directories do not exist they are created. Existing directories will be overwritten with new files from the bucket

Args:

bucketName (str): name of top-level storage object

keys (list): strings of files (with full path from where the module is running) to download

localDir (str): local directory location for download [default ‘’]

overwriteFlag (bool): flag of whether download will overwrite any local files

get_bucket_names()

Get all top level storage buckets from Azure Storage regardless of password access (this is not exposed through the driver)

Return:

list of bucket name strings

get_bucket_object_names(bucketName, searchDir='', searchFile='')

Recursively list all objects within top-level reservoir

Args:

bucketName (str): string name of Azure blob ‘bucket’

searchDir (str): default (empty) search string for directory names

searchFile (str): default (empty) search string for file names

Return:

list of keys matching search (or all objects in bucket)

get_storage_object(typeFlag='')

Return the main Azure storage object for the underlying remote API (for testing)

upload_bucket_objects(bucketName, path, remoteDir='', excludeList=[])

Upload contents of path name directory to an Azure bucket. Overwrites existing contents

Args:

bucketName (str): string name of Azure bucket

path (str): directory name to upload

remoteDir (str): remote folder in which to place uploads [default = ‘’]

excludeList (list): list of search strings to exclude from upload