Course Overview:

The focus of Red Hat OpenStack Administration I: Core Operations for Cloud Operators (CL110) will be managing OpenStack using both the web-based dashboard and the command-line interface, in addition to managing instances and installing a proof-of-concept environment using Red Hat OpenStack Platform (RHOSP) director. Essential skills covered in the course include configuring Red Hat OpenStack Platform (using the director UI); managing users, projects, flavors, roles, images, networking, and block storage; setting quotas; and configuring images at instantiation.
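
As a rough illustration of that command-line work (a hedged sketch, not official courseware; the project, user, flavor, image, and network names below are hypothetical), the standard openstack client drives most of these tasks:

    # Create a project, add a user to it, and grant a role
    # (the role name may be "member" or "_member_" depending on the release)
    openstack project create --description "Demo project" demo-project
    openstack user create --project demo-project --password redhat demo-user
    openstack role add --project demo-project --user demo-user member

    # Set quotas, define a flavor, and launch an instance
    openstack quota set --instances 10 --cores 20 --ram 40960 demo-project
    openstack flavor create --vcpus 2 --ram 2048 --disk 20 m1.demo
    openstack server create --image rhel8 --flavor m1.demo --network demo-net demo-server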

Attendees to CL110 Red Hat OpenStack Administration I: Core Operations for Cloud Operators will receive TechNow approved course materials and expert instruction.

Dates/Locations:

No Events

Duration: 5 Days

Prerequisites:

This course is designed for Linux system administrators, cloud administrators, and cloud operators interested in, or responsible for, maintaining a private or hybrid cloud.

The prerequisite for this course is Red Hat Certified System Administrator (RHCSA) certification or equivalent experience.

Course Outline:

  • Launch an instance
  • Manage projects, quotas, and users
  • Manage networks, subnets, routers, and floating IP addresses
  • Create and manage block and object storage in the OpenStack framework
  • Customize instances with cloud-init (see the sketch after this outline)
  • Deploy scalable stacks
  • Deploy Red Hat OpenStack Platform using RHOSP director
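
For example, the cloud-init item above typically uses a small user-data file passed at launch time; this is a hedged sketch only, and the file name, package, and instance details are hypothetical:

    # Write a minimal cloud-init user-data file, then pass it at launch
    cat > web-init.yaml <<'EOF'
    #cloud-config
    package_update: true
    packages:
      - httpd
    runcmd:
      - systemctl enable --now httpd
    EOF
    openstack server create --image rhel8 --flavor m1.demo \
        --network demo-net --user-data web-init.yaml web-demo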

 


 

Course Overview:

In this course, DP-200: Implementing an Azure Data Solution, students implement various data platform technologies into solutions that align with business and technical requirements, covering on-premises, cloud, and hybrid data scenarios that incorporate both relational and NoSQL data. They also learn how to process data using a range of technologies and languages for both streaming and batch workloads.
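
As a hedged sketch of the kind of provisioning the labs exercise (the resource names and region below are hypothetical, not course materials), the Azure CLI can stand up both the relational and NoSQL stores discussed above:

    # Resource group to hold the data services
    az group create --name rg-dp200 --location eastus

    # General-purpose storage account for blob and file data
    az storage account create --name dp200storage01 --resource-group rg-dp200 \
        --location eastus --sku Standard_LRS --kind StorageV2

    # NoSQL: a Cosmos DB account (SQL API by default)
    az cosmosdb create --name dp200-cosmos --resource-group rg-dp200

    # Relational: a logical SQL server to host Azure SQL databases
    az sql server create --name dp200-sql --resource-group rg-dp200 \
        --location eastus --admin-user sqladmin --admin-password '<strong-password>'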

Students also explore how to implement data security, including authentication, authorization, data policies, and standards. They define and implement monitoring for both data storage and data processing activities. Finally, they manage and troubleshoot Azure data solutions, including optimization and disaster recovery for big data, batch processing, and streaming data solutions.
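
One hedged illustration of the authorization and network-access side of this (the user, subscription ID, and client IP address are placeholders, and the resource names match the hypothetical sketch above):

    # Grant a user data-plane access to blob storage with Azure RBAC
    az role assignment create --assignee analyst@example.com \
        --role "Storage Blob Data Contributor" \
        --scope "/subscriptions/<subscription-id>/resourceGroups/rg-dp200/providers/Microsoft.Storage/storageAccounts/dp200storage01"

    # Limit the logical SQL server to a known client address
    az sql server firewall-rule create --resource-group rg-dp200 --server dp200-sql \
        --name office-client --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10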

TechNow has worked with worldwide enterprise infrastructures for over 20 years and has developed demos and labs that exemplify the techniques required to demonstrate cloud technologies and to effectively manage security in the cloud environment.

Attendees to DP-200: Implementing an Azure Data Solution will receive TechNow approved course materials and expert instruction.

Dates/Locations:

No Events

Course Duration: 4 days

Course Outline:

  • Azure for the Data Engineer
  • Working with Data Storage
  • Enabling Team Based Data Science with Azure Databricks
  • Building Globally Distributed Databases with Cosmos DB
  • Working with Relational Data Stores in the Cloud
  • Performing Real-Time Analytics with Stream Analytics
  • Orchestrating Data Movement with Azure Data Factory
  • Securing Azure Data Platforms
  • Monitoring and Troubleshooting Data Storage and Processing

Prerequisites:

In addition to their professional experience, students who take this training should have technical knowledge equivalent to the following course:

      • AZ-900: Microsoft Azure Fundamentals


TechNow has heard many students talk about the virtualized/remote training that TechNow does not do.  While teaching our most recent offering of PA-215: Palo Alto Networks Firewall Essentials FastTrack, a student told the story of how he ended up in our course.  We have heard similar stories for other technologies, such as Cisco, VMware, Blue Coat, and other products.

A large percentage of training is moving to virtualized/remote lab environments.  Students are asked to use some variant of remote-access software to connect into the training company's lab environment.  The student in our Palo Alto Networks Firewall course told us that he had attended a very costly offering of that course from the vendor and was not able to perform any labs.  There were network connectivity issues, problems with the remote-access software, and other failures.  The whole training experience was frustrating and unproductive.

We keep our labs open to students who would like after-hours or before-hours access.  Repeatedly going through a lab ingrains that knowledge for later recall.  Touching hardware is critical to understanding the problems that arise when a cable comes loose or gets plugged into the wrong port.  There are other scenarios as well, such as pulling a power cable, turning off a power strip, or accidentally overwriting a configuration.  These disaster scenarios require hands-on physical access to hardware.  Preventing and recovering from disasters is what it is all about, and that requires hands-on, instructor-led training on real hardware.