Researchers at Academic Medical Centers (AMCs) use programs such as Observational Health Data Sciences and Informatics (OHDSI) and Research Electronic Data Capture (REDCap) to interact with healthcare data. Our internal team at AWS has provided solutions such as OHDSI-on-AWS and REDCap environments on AWS to help clinicians analyze healthcare data in the AWS Cloud. Occasionally, these solutions break when some portion of them changes (for example, an updated service). The Automated Testing Pipeline enables our team to take a proactive approach to discovering these breaks and their causes, expediting the repair process.
OHDSI-on-AWS provides these AMCs with the ability to store and analyze observational health data in the AWS Cloud. REDCap is a web application for managing surveys and databases in HIPAA-compliant environments. Using our solutions, these programs can be spun up easily on AWS infrastructure using AWS CloudFormation templates.
Updates to AWS services and other program libraries can cause the CloudFormation template to fail during deployment. Other times, the deployed outputs may not operate correctly, or the template may not work in every AWS Region. This creates a negative customer experience. Some customers may discover this kind of break and decide not to move forward with the solution. Others may not even realize the solution is broken, so they might unknowingly be working with a malfunctioning environment. Furthermore, we cannot always provide fast support to the customers who contact us about broken solutions. To meet our team's needs and the needs of our customers, we took a CI/CD approach to maintaining these solutions and developed the Automated Testing Pipeline, which regularly tests solution deployment and changes to source files.
This post shows the features of the Automated Testing Pipeline and provides resources to help you get started using it with your AWS account.
Overview of Automated Testing Pipeline Solution
The Automated Testing Pipeline solution as a whole is designed to automatically deploy CloudFormation templates, run tests against the deployed environments, send notifications if an issue is discovered, and allow for insightful testing data to be easily explored.
CloudFormation templates to be tested are stored in an Amazon S3 bucket. Custom test scripts and TaskCat deployment configuration are stored in an AWS CodeCommit repository.
The pipeline is triggered in one of three ways: an update to the CloudFormation template in S3, an Amazon CloudWatch Events rule, or an update to the testing source code repository. Once the pipeline has been triggered, AWS CodeBuild pulls the source code to deploy the CloudFormation template, test the deployed environment, and store the results in an S3 bucket. If any failures are discovered, subscribers to the failure topic are notified. The following diagram shows the overall architecture.
To create the Automated Testing Pipeline, two interns collaborated over the course of five weeks to produce the architecture and custom test scripts. We divided the work of constructing a serverless architecture and writing test scripts for the URLs output by OHDSI-on-AWS and REDCap environments on AWS.
The following tasks were completed to build out the Automated Testing Pipeline solution:
- Set up AWS IAM roles for securely accessing AWS resources
- Create CloudWatch events to trigger AWS CodePipeline
- Set up CodePipeline and CodeBuild to run TaskCat and testing scripts
- Configure TaskCat to deploy CloudFormation solutions in various AWS Regions
- Write test scripts to interact with CloudFormation solutions’ deployed environments
- Subscribe to receive emails detailing test results
- Create a CloudFormation template for the Automated Testing Pipeline
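To illustrate the TaskCat configuration step above, a minimal `.taskcat.yml` might look like the following sketch. The project name, template path, and Region list are placeholders for illustration, not values from our repository:

```yaml
project:
  name: automated-testing-pipeline
  # Regions in which TaskCat deploys and tears down the template
  regions:
    - us-east-1
    - us-west-2
tests:
  # One named test per template to deploy
  solution-deploy:
    template: templates/solution.template.yaml
```

TaskCat reads this file, launches the template as a stack in each listed Region, reports pass/fail per Region, and deletes the stacks when finished.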
The architecture can be extended to test any CloudFormation stack. For this particular use case, we wrote the test scripts specifically to test the URLs output by the CloudFormation solutions. The Automated Testing Pipeline has the following features:
- Deployed in a single AWS Region, with the exception of the tested CloudFormation solution
- Has a serverless architecture operating at the AWS Region level
- Deploys a pipeline which can deploy and test the CloudFormation solution
- Creates CloudWatch events to activate the pipeline on a schedule or when the solution is updated
- Creates an Amazon SNS topic for notifying subscribers when there are errors
- Includes code for running TaskCat and scripts to test solution functionality
- Built automatically in minutes
- Low in cost with free tier benefits
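Because our scripts target only the URLs a solution outputs, a small helper can separate those from the stack's other outputs before testing begins. The following is a sketch; the output list mirrors the shape boto3's `describe_stacks` returns, and the key names are hypothetical:

```python
def url_outputs(outputs):
    """Return {OutputKey: OutputValue} for every stack output that looks like a URL.

    `outputs` is a list of {"OutputKey": ..., "OutputValue": ...} dicts,
    the same shape boto3 returns in Stacks[n]["Outputs"].
    """
    return {
        o["OutputKey"]: o["OutputValue"]
        for o in outputs
        if o["OutputValue"].startswith(("http://", "https://"))
    }
```

Each URL in the resulting dict can then be handed to the custom test scripts, while non-URL outputs such as VPC or subnet IDs are ignored.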
The pipeline is triggered automatically when an event occurs. These events include a change to the CloudFormation solution template, a change to the code in the testing repository, and a rule firing on a regular schedule. Additional events can be added in the CloudWatch console.
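As a sketch of what one such trigger looks like, the event pattern below matches uploads of a template to the source bucket (the bucket name is a placeholder; S3 API calls reach CloudWatch Events by way of CloudTrail, hence the detail-type):

```python
import json

# Hypothetical event pattern for a CloudWatch Events rule that fires
# when a new template is uploaded to the source S3 bucket.
s3_upload_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {"bucketName": ["my-template-source-bucket"]},
    },
}

# A rule using this pattern could be created with boto3, e.g.:
#   events.put_rule(Name="template-updated",
#                   EventPattern=json.dumps(s3_upload_pattern))
print(json.dumps(s3_upload_pattern, indent=2))
```

The scheduled trigger is simpler still: a rule with a schedule expression such as `rate(1 day)` instead of an event pattern.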
When the pipeline is triggered, the testing environment is set up by CodeBuild. CodeBuild uses a build specification file kept within our source repository to set up the environment and run the test scripts. We created a CodeCommit repository to host the test scripts alongside the build specification. The build specification includes commands to run TaskCat — an open-source tool for testing the deployment of CloudFormation templates. TaskCat provides the ability to test the deployment of the CloudFormation solution, but we needed custom test scripts to ensure that we can interact with the deployed environment as expected. If the template is successfully deployed, CodeBuild handles running the test scripts against the CloudFormation solution environment. In our case, the environment is accessed via URLs output by the CloudFormation solution.
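A build specification for this flow is roughly shaped like the sketch below. The runtime version, script name, and taskcat invocation are illustrative (and version-dependent), not copied from our repository:

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      # Install TaskCat plus the libraries the custom scripts need
      - pip install taskcat selenium
  build:
    commands:
      # Deploy the CloudFormation template in the configured Regions
      - taskcat test run
      # Run the custom scripts against the deployed environment's URLs
      - python run_tests.py
artifacts:
  files:
    # Preserve TaskCat's logs and reports for the results bucket
    - taskcat_outputs/**/*
```

CodeBuild executes these phases in order, so the custom scripts only run after TaskCat has attempted the deployment.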
```python
# Selenium imports used by the test function
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec


def log_in(driver, user, passw, link, btn_path, title):
    """Enter username and password then submit to log in

    :param driver: webdriver for Chrome page
    :param user: username as String
    :param passw: password as String
    :param link: url for page being tested as String
    :param btn_path: xpath to submit button
    :param title: expected page title upon successful sign in
    :return: success String tuple if log in completed,
             failure description String tuple otherwise
    """
    try:
        # post username and password data
        driver.find_element_by_xpath('//input[@name="username"]').send_keys(user)
        driver.find_element_by_xpath('//input[@name="password"]').send_keys(passw)
        # click sign in button and wait for page update
        driver.find_element_by_xpath(btn_path).click()
    except NoSuchElementException:
        return 'FAILURE', 'Unable to access page elements'
    try:
        WebDriverWait(driver, 20).until(ec.url_changes(link))
        WebDriverWait(driver, 20).until(ec.title_is(title))
    except TimeoutException as e:
        print("Timeout occurred (" + str(e) + ") while attempting to sign in to " + driver.current_url)
        if "Sign In" in driver.title or "invalid user" in driver.page_source.lower():
            return 'FAILURE', 'Incorrect username or password'
        else:
            return 'FAILURE', 'Sign in attempt timed out'
    return 'SUCCESS', 'Sign in complete'
```
We store the test results in JSON format for ease of parsing. TaskCat generates a dashboard which we customize to display these test results. We are able to insert our JSON results into the dashboard to make it easy to find errors and access log files. This dashboard is a static HTML file that can be hosted on an S3 bucket. In addition, whenever an error occurs, a message containing a link to this dashboard is published to an SNS topic.
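Parsing those results for the dashboard can be as simple as the following sketch. The record format shown (a list of objects with `test`, `status`, and `detail` fields) is illustrative, not the exact schema our scripts emit:

```python
import json

def summarize(results_json):
    """Collapse JSON test results into a pass/fail summary for the dashboard.

    Expects a JSON array of {"test": ..., "status": ..., "detail": ...}
    records, where status is 'SUCCESS' or 'FAILURE' as returned by the
    test functions.
    """
    results = json.loads(results_json)
    failures = [r for r in results if r["status"] != "SUCCESS"]
    return {
        "total": len(results),
        "failed": len(failures),
        # (test name, failure description) pairs for display
        "failures": [(r["test"], r["detail"]) for r in failures],
    }
```

Keeping the results machine-readable like this is what lets the same data drive both the dashboard rows and the SNS error notifications.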
In true CI/CD fashion, this end-to-end design automatically performs tasks that would otherwise be performed manually. We have shown how deploying solutions, testing solutions, notifying maintainers, and providing a results dashboard are all actions handled entirely by the Automated Testing Pipeline.
Getting Started with the Automated Testing Pipeline
Prerequisite tasks to complete before deploying the pipeline:
Once the prerequisite tasks are completed, the pipeline is ready to be deployed. Detailed information about deployment, altering the source code to fit your use case, and troubleshooting issues can be found at the GitHub page for the Automated Testing Pipeline.
For those looking to jump right into deployment, click the Launch Stack button below.
Tasks to complete after deployment:
- Subscribe to SNS topic for error messages
- Update the code to match the parameters and CloudFormation template that were chosen
- Upload the desired CloudFormation template to the created source S3 bucket (skip this step if you are testing OHDSI-on-AWS)
- Push the source code to the created CodeCommit Repository
After the code is pushed to the CodeCommit repository and the CloudFormation template has been uploaded to S3, the pipeline will run automatically. You can visit the CodePipeline console to confirm that the pipeline is running with an “in progress” status.
You may desire to alter various aspects of the Automated Testing Pipeline to better fit your use case. Listed below are some actions you can take to modify the solution to fit your needs:
- Go to CloudWatch Events and update the rules that automatically start the pipeline.
- Scale out testing by providing custom testing scripts or altering the existing ones.
- Test a different CloudFormation template by uploading it to the source S3 bucket created and configuring the pipeline accordingly. Custom test scripts will likely be required for this use case.
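When scaling out testing with your own scripts, it helps to follow the same (status, detail) tuple convention as our Selenium tests. The following is a minimal sketch of such a script using only the standard library; the function name and convention carry over, everything else is illustrative:

```python
import urllib.request
import urllib.error

def check_url(url, timeout=10):
    """Return ('SUCCESS', detail) if the URL answers with HTTP 200,
    ('FAILURE', detail) otherwise, matching our other tests' convention."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status == 200:
                return 'SUCCESS', 'Reached ' + url
            return 'FAILURE', 'Unexpected status ' + str(resp.status)
    except (urllib.error.URLError, OSError) as err:
        # Covers DNS failures, refused connections, and timeouts
        return 'FAILURE', str(err)
```

A runner script can call a function like this for each URL the tested solution outputs and collect the tuples into the JSON results file.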
Challenges Addressed by the Automated Testing Pipeline
The Automated Testing Pipeline directly addresses the challenges we faced with maintaining our OHDSI and REDCap solutions. Additionally, the pipeline can be used whenever there is a need to test CloudFormation templates that are being used on a regular basis or are distributed to other users. Listed below is the set of specific challenges we faced maintaining CloudFormation solutions and how the pipeline addresses them.
The desire to better serve our customers guided our decision to create the Automated Testing Pipeline. For example, we know that source code used to build the OHDSI-on-AWS environment changes on occasion. Some of these changes have caused the environment to stop functioning correctly. This left us with cases where our customers had to either open an issue on GitHub or reach out to AWS directly for support. Our customers depend on OHDSI-on-AWS functioning properly, so fixing issues is a high priority for our team. The ability to run tests regularly allows us to take action without depending on notice from our customers. Now, we can be the first to know if something goes wrong and begin fixing it sooner.
“This automation will help us better monitor the CloudFormation-based projects our customers depend on to ensure they’re always in working order.” — James Wiggins, EDU HCLS SA Manager
If you decide to stop using the Automated Testing Pipeline, follow the steps below to remove the resources associated with it from your AWS account.
- Delete the root stack of the tested CloudFormation solution
- Delete the pipeline CloudFormation stack
- Delete the ATLAS S3 bucket if OHDSI-on-AWS was chosen
Deleting the pipeline CloudFormation stack handles removing the resources associated with its architecture. Depending on the CloudFormation template chosen for testing, additional resources associated with it may need to be removed. Visit our GitHub page for more information on removing resources.
The ability to continuously test preexisting solutions on AWS has great benefits for our team and our customers. The automated nature of this testing frees up time for us and our customers, and the dashboard makes issues more visible and easier to resolve. We believe that sharing this story can benefit anyone facing challenges maintaining CloudFormation solutions in AWS. Check out the Getting Started with the Automated Testing Pipeline section of this post to deploy the solution.
More information about the key services and open-source software used in our pipeline can be found at the following documentation pages:
About the Authors
Raleigh Hansen is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. She is passionate about solving problems and improving upon existing systems. She also adores spending time with her two cats.
Dan Le is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. He is passionate about technology and enjoys doing art and music.