IoT Security Lab Docs

Introduction

The IoT Security Laboratory supports GPU-accelerated Machine Learning and Deep Learning research workloads. The power of this system lies in its multiple state-of-the-art Nvidia A100 40GB GPUs. The IoT Security Laboratory servers are available to UTSA students, faculty, and people affiliated with the laboratory. Currently, the only way to access them is through the UTSA network, either with a direct connection over WiFi or through the VPN. The system supports Anaconda, which allows you to use a wide variety of Python modules for different data processing frameworks, such as TensorFlow and PyTorch.
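For example, once logged in you can create a Conda environment and check that a GPU is visible from Python. This is a minimal sketch: the environment name and package choices are illustrative, not prescribed by the lab, and the GPU check will only succeed on a compute node, since the login node has no GPU.

```bash
# Create and activate a personal Conda environment (name is illustrative)
conda create -n iotsec-ml python=3.10 -y
conda activate iotsec-ml

# Install a GPU-enabled framework (exact version/channel is your choice)
pip install torch

# On a compute node, confirm that PyTorch can see the A100 GPU
python -c "import torch; print(torch.cuda.is_available())"
```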

File Systems

The IoT Security lab servers mount two shared ext4 file systems (FS), on which each user has corresponding node-specific directories, $HOME and $WORK. On the login node both file systems are available; however, on the compute nodes only the work directory is accessible. There is an alias to traverse between the file systems: cdw to go to $WORK.
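A quick way to orient yourself is to print the two directory variables and use the cdw alias; this sketch assumes $HOME and $WORK are exported as environment variables in your shell.

```bash
# Print your two user directories
echo "$HOME"   # home directory (only available on the login node)
echo "$WORK"   # work directory (available on login and compute nodes)

# Jump to your work directory using the provided alias
cdw
pwd            # should now print the value of $WORK
```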

In addition, there are directories for storing large datasets under /datasets, which are globally accessible with read-only permission. If you need a dataset to be made available, please contact one of the administrators.
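Since /datasets is mounted read-only, datasets are used in place rather than copied or modified; the dataset name below is a hypothetical example.

```bash
# List the datasets that are currently available
ls /datasets

# Read a dataset in place (hypothetical name); writes will fail because
# the mount is read-only
head /datasets/example-dataset/README.txt
```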

Good Conduct

You share the IoTSec servers with many, sometimes hundreds, of other users, and what you do on the system affects others. All users must follow a set of good practices, which entail limiting activities that may impact the system for other users. Exercise good conduct to ensure that your activity does not adversely impact the system and the research community with whom you share it.

System Architecture

The IoTSec computing system consists of two main computing servers divided into nodes. A node is an abstraction representing a virtual system designated for a specific purpose; for example, the login node is shared among all users.

Several users may be logged on at one time accessing the file systems. Dozens (sometimes hundreds) of processes may be running on the login node, and its resources are shared on a first-come, first-served basis. In this way, the login node provides an interface to interact with any of the components and is the main point of entry.
Think of the login node as a prep area where users may edit and manage files, compile short pieces of code, issue file transfers, submit and track jobs, and more; see the transfer example below.
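For example, copying input data from your local machine into $WORK is a typical login-node task. This is a sketch; <login-node> is a placeholder for the address the administrators give you, and my-experiment is a hypothetical directory.

```bash
# From your local machine: copy input data to the login node
rsync -avz ./my-experiment/ <login-node>:my-experiment/

# On the login node: move the data into your work directory
mv ~/my-experiment "$WORK"/
```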

Do not run jobs or perform intensive computational activity on the login node or the shared file systems. Your account may be suspended if your jobs are impacting other users.

On the other hand, the compute nodes are where actual computations occur and where research is performed. All batch jobs, as well as development and debugging sessions, must be run on the compute nodes. To access a compute node, initiate an interactive session by opening an SSH session from the login node, running ssh compute-node.
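A minimal sketch of that workflow, assuming compute-node is the hostname shown above, your environment is the one from the earlier example, and train.py is a hypothetical training script under $WORK:

```bash
# From the login node, open an interactive session on a compute node
ssh compute-node

# On the compute node only $WORK is available, so run everything there
cdw
conda activate iotsec-ml   # environment name from the earlier example
python train.py            # hypothetical training script
```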

System Specifications

Login Node

| Specification | Description |
|---------------|-------------|
| vCPU | 8x Xeon Gold 5118 |
| RAM | 16 GB DDR4 2400 MHz |
| Storage | 1 TB HDD RAID 0 |
| Network | 1 Gbps interface |
| OS | Ubuntu 22.04 |
| Kernel | 5.15 |

Compute Nodes

| Specification | Description |
|---------------|-------------|
| vCPU | AMD Epyc 7302 3.3 GHz 16C/32T |
| RAM | 512 GB DDR4 3200 MHz |
| GPU | Nvidia A100 40 GB HBM2e PCIe |
| Storage | 3.5 TB NVMe SSD |
| Network | 1 Gbps interface |
| OS | Ubuntu 20.04 |
| PSU | Redundant 2000 W |
| Kernel | 5.4 |
| CUDA | 11.1 |
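You can confirm these specifications from a compute node by querying the GPU and CUDA toolkit directly; this sketch assumes the CUDA toolkit binaries are on your PATH.

```bash
# Show the A100 GPU, driver version, and current utilization
nvidia-smi

# Show the installed CUDA toolkit version (should report 11.1)
nvcc --version
```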

Administrators

You can contact any of the administrators via email. Make sure to use your official student email, since external email domains will be ignored for security reasons.

Kevin email: cse894 [at] my utsa …

Desiree email: fnw920 [at] my utsa …