Executive Summary

About Client

New Zealand’s most awarded mortgage and insurance adviser, Global Finance, helps about 1,500 customers a year with their mortgage and insurance needs so they can meet their financial goals. Global Finance offers customers greater choice and freedom, with loan approvals available from numerous lenders. Managing a large volume of unstructured data across many clients and team members had become a challenge. As Peritos was already managing their Dynamics environment, we successfully guided and supported Global Finance’s move from storing data in Azure Dataverse to Azure Blob Storage, which saved them $1,500 a month.

https://www.globalfinance.co.nz/

Location: Auckland, New Zealand

Project Background

Global Finance has been offering smarter loans and insurance since 1999. As one of the best mortgage and insurance advisers in NZ, Global Finance has helped clients save on their loans by avoiding unnecessary interest and becoming mortgage-free faster; since the beginning, some customers have become mortgage-free in as little as 7 years rather than the standard 30-year term. Global Finance was already using Dynamics 365 and storing data in Azure Dataverse; moving to Azure Blob Storage, which is optimized for massive amounts of unstructured data, was the natural next step.

Scope & Requirement

In the first phase, the implementation scope was discussed as follows:

  • Set up an export of data from Azure Dataverse to Azure Blob Storage, giving Global Finance a durable home for its large volume of unstructured data.
  • Meet the demand for storing and analyzing large volumes of unstructured data, which has grown over the past decade; Azure Blob Storage is a solution that fulfills these enterprise needs well.

Implementation

Technology and Architecture

Technology 

The migration was deployed with the following technological components:

  • Azure Dataverse: the underlying technology is Azure SQL Database.
  • Azure Blob Storage: supports the most popular development frameworks, including Java, .NET, Python, and Node.js.
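As a concrete illustration of the pattern above, the sketch below uploads an exported Dataverse record to Blob Storage with the `azure-storage-blob` client library. The container layout, entity names, and connection string are illustrative assumptions, not Global Finance’s actual configuration.

```python
# Sketch: landing an exported Dataverse record as JSON in Azure Blob Storage.
# Container name, blob layout, and connection string are assumptions for
# illustration only.
import json
from datetime import date


def blob_path_for(entity: str, record_id: str, day: date) -> str:
    """Partition blobs by entity and date so containers stay browsable."""
    return f"{entity}/{day:%Y/%m/%d}/{record_id}.json"


def upload_record(connection_string: str, container: str,
                  entity: str, record_id: str, record: dict) -> None:
    """Serialize one record and upload it (requires azure-storage-blob)."""
    from azure.storage.blob import BlobServiceClient
    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(
        container, blob_path_for(entity, record_id, date.today()))
    blob.upload_blob(json.dumps(record), overwrite=True)
```

Partitioning by entity and date keeps the unstructured store organized without needing a relational schema, which is what makes Blob Storage cheaper than Dataverse capacity for this workload.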

Security & Compliance:

  • Tagging policies
  • Azure Policy for compliance checks
  • NIST compliance
  • Guardrails
  • Microsoft Defender for Cloud

Backup and Recovery

Azure Backup provided a simple, secure, cost-effective, cloud-based backup solution to protect business- and application-critical data stored in Azure Blob Storage in two ways: continuous backups and periodic backups.

Network Architecture 

  • Site-to-site VPN architecture using Azure Virtual WAN
  • Azure Firewall
  • Monitoring with Azure Monitor and NSG flow logs.

Cost Optimization 

  • Alerts and notifications were configured in Azure Cost Management.

Code Management, Deployment

  • ARM templates and scripts for creating the Azure resources were handed over to the client.

Challenges of Migrating from Azure Dataverse to Azure Blob Storage

  • It was a challenge to ensure that the post-migration environment met all of the compliance criteria while remaining cost-effective.

Project Completion

Duration

July 2022 (~1 week)

Deliverables

  • Dynamics license
  • Power Apps license
  • Power Apps per-user license
  • Power Apps per-app license

Support for Dynamics Discounted Licensing

  • For all licenses we implement, we provide monthly billing with 20-day credit terms.
  • We provide value-added services, sending the client reports on license usage and each user’s last activity date, giving them visibility and helping them manage their license costs.

Testimonial

  • Azure Blob Storage has many organizational features that have solved Global Finance’s storage problem at a lower cost. Although it is designed for unstructured data, containers let businesses construct their preferred categories by uploading specific blobs to specific containers.
  • Shifting from Azure Dataverse to Azure Blob Storage has given Global Finance the freedom to access objects in Blob Storage via the Azure Storage REST API, Azure PowerShell, the Azure CLI, or an Azure Storage client library.

Global Finance can now connect securely to Blob Storage using the SSH File Transfer Protocol (SFTP) and mount Blob Storage containers using the Network File System 3.0 protocol. Peritos handled the Microsoft Dynamics 365 domain for Global Finance and provided discounted licensing, which proved very cost-effective.

Building Manager
Global Finance Services

Next Phase

We are also in discussion with the client on other projects:

1. Dynamics CRM system Support 

2. O365 License Management 

Looking for similar services? Please get in touch.


Executive Summary

About Client

The client, Yorker, is focused on leveraging technology to address the challenge of tracking and managing cricket bowlers’ net-practice bowling loads. Recognizing the risk of overtraining and injuries from improper tracking, Yorker aims to provide a digital solution tailored to cricket players. An AWS custom application for Yorker empowers bowlers to automate session recordings, create personalized training plans, and monitor progress effectively. The app also fosters a sense of community by enabling interaction, knowledge sharing, and participation in skill-building challenges. The project is being executed in multiple phases, beginning with a Minimum Viable Product (MVP) to establish a strong foundation for future improvements. Yorker’s commitment to innovation and user-centric design reflects its dedication to transforming how athletes manage their training and optimize performance while minimizing injury risks.

Project Background - Enhancing Cricket Training through Digital Bowling Load Management

The Yorker mobile app project addresses a major challenge for cricket bowlers: accurately tracking and managing their bowling loads during net practice. Without proper tracking, bowlers risk improper training regimens, leading to overtraining and injuries. The Yorker app offers a digital solution that automates session recordings, capturing key metrics like delivery count, types of deliveries, and intensity levels. Additionally, the app allows bowlers to create personalized training plans, track progress, and receive real-time alerts to avoid overexertion. By leveraging technology, this initiative not only helps reduce injury risks but also fosters a sense of community. Bowlers can share experiences, learn from experts, and engage in skill-enhancing challenges. Ultimately, the app aims to optimize performance while ensuring bowlers train safely and efficiently, revolutionizing the way athletes manage their training.

Scope & Requirement for AWS Custom Application For Yorker

Scope: The first phase of the Yorker mobile application focuses on developing a Minimum Viable Product (MVP) to establish a strong foundation. Specifically, this phase will deliver core functionalities to allow cricket bowlers to start tracking their training sessions and managing their profiles. The scope includes:

  • User Authentication: Secure login and registration functionality for bowlers.
  • Profile Management: Basic user profile setup, including personal details and preferences.
  • Bowling Record Tracking: Automated entry for recording bowling sessions, including delivery count, types, and intensity.
  • Basic Reporting: Simple reports summarizing bowling loads to help users monitor their progress.

Requirements:

  • Mobile App Development: We will develop the front end using React Native to ensure cross-platform compatibility on iOS and Android.
  • Backend Services: Built using .NET with RESTful APIs for data communication.
  • Database: RDS Aurora PostgreSQL for structured data storage of user profiles and bowling records.
  • CI/CD Pipeline: Set up Continuous Integration/Continuous Deployment processes for efficient development and release.
  • User Interface Design: Intuitive and user-friendly UI aligned with branding, focusing on easy data entry and report viewing.

Implementation

Technology and Architecture for AWS Custom Application For Yorker

Read more on the technology and Architecture we used for AWS Custom Application Development 

Technology
WAF, API Gateway, Lambda Functions, RDS, S3, CloudWatch, Secrets Manager

Integrations
The application leverages RESTful APIs for smooth data transfer between the front end and back end, facilitating user authentication, session tracking, and profile management. Future integrations may include cloud-based analytics and third-party push notifications to enhance user engagement.

Scalability
The app is designed to run on serverless services, allowing automatic scaling based on usage.

Cost Optimization
Peritos helped optimize costs for Yorker by designing an efficient AWS architecture using auto-scaling, right-sized instances, and serverless technologies. With tools like AWS Cost Explorer and Trusted Advisor, we continuously monitored and reduced spending. Automation through CI/CD pipelines and code optimization further enhanced performance while lowering operational costs.

Backup and Recovery
A robust backup strategy using Amazon S3 prevents data loss, while automated recovery processes ensure quick restoration in case of failure.
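To make the S3 backup strategy concrete, the sketch below builds a lifecycle configuration that transitions old backups to Glacier and eventually expires them. The bucket prefix and retention windows are illustrative assumptions, not Yorker’s actual policy.

```python
# Sketch: an S3 lifecycle policy for a backup bucket. Prefix and retention
# windows are assumptions for illustration.
def backup_lifecycle_rules(glacier_after_days: int = 30,
                           expire_after_days: int = 365) -> dict:
    """Build a lifecycle configuration: move aging backups to Glacier,
    then expire them, to keep storage costs bounded."""
    return {
        "Rules": [{
            "ID": "backup-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [{"Days": glacier_after_days,
                             "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_after_days},
        }]
    }


def apply_lifecycle(bucket: str) -> None:
    """Apply the policy (requires boto3 and AWS credentials at runtime)."""
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=backup_lifecycle_rules())
```

Keeping the rule construction separate from the API call makes the retention policy easy to unit-test and review.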

Features of AWS Custom Application For Yorker

  • Automated Bowling Session Tracking
    Capture and record each bowling session, including the number of deliveries, delivery types, and intensity levels, thus providing players with a detailed log of their training activities.

  • Personalized Training Plans
    Create and customize training plans tailored to individual fitness levels and goals. Players and coaches can adjust these plans based on real-time performance data to optimize training regimens.

  • Progress Monitoring & Alerts
    Track progress against predefined plans, with visual dashboards and alerts to notify users of deviations that may lead to overexertion or injuries.

  • User Profile & Simple Reporting
    Maintain a personalized profile to store training history, generate basic reports on bowling performance, and gain insights to improve overall training effectiveness.
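The progress-monitoring and alert feature above can be sketched as a small load model. The intensity weights and weekly threshold here are illustrative assumptions, not the app’s actual sports-science model.

```python
# Sketch of the overexertion-alert logic: weight each session's delivery
# count by intensity and flag when the weekly total crosses a threshold.
# Weights and threshold are illustrative assumptions.
from dataclasses import dataclass

INTENSITY_WEIGHT = {"low": 1.0, "medium": 1.5, "high": 2.0}


@dataclass
class BowlingSession:
    deliveries: int
    intensity: str  # "low" | "medium" | "high"

    @property
    def load(self) -> float:
        """Weighted bowling load for this session."""
        return self.deliveries * INTENSITY_WEIGHT[self.intensity]


def weekly_alert(sessions: list[BowlingSession],
                 threshold: float = 300.0) -> bool:
    """True when the summed weighted load for the week exceeds the threshold."""
    return sum(s.load for s in sessions) > threshold
```

A real implementation would calibrate the weights per player; the point is that alerts derive from a single auditable load function rather than ad-hoc rules.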

Challenges with AWS Custom Application For Yorker

  • Accurate Data Capture & Tracking
    Ensuring the app reliably records detailed bowling metrics like delivery type, count, and intensity without manual errors poses a challenge, especially in a real-time sports environment.

  • Scalability & Performance
    As user adoption grows, maintaining app performance and scalability will be critical, particularly during peak usage times. Designing a backend that can handle large volumes of data efficiently is essential.

  • User Engagement & Retention
    Encouraging consistent use of the app among bowlers can be challenging. Building features that foster community interaction, personalized plans, and gamified challenges will be crucial to retaining users.

  • Cross-Platform Compatibility
    Delivering a seamless user experience across both iOS and Android devices requires rigorous testing to address device-specific issues, screen resolutions, and performance variations.

Project Completion of AWS Custom Application For Yorker

Duration

  • Aug 2024 – Oct 2024: implementation and support
  • Oct 2024 – present: rolling out the changes to production

Deliverables

  • Requirements Specification & Architectural Design Documents
    Comprehensive documentation outlining detailed project requirements, technical architecture, and system design.

  • Minimum Viable Product (MVP)
    A fully functional MVP with core features, including user authentication, profile management, automated bowling session tracking, and basic reporting.

  • Mobile Application UI/UX Design
    Intuitive and user-friendly interface designs for the app, ensuring a seamless experience on both iOS and Android devices.

  • Backend Services & APIs
    Development of scalable backend services using .NET, along with RESTful APIs for data communication between the mobile app and server.

  • CI/CD Pipeline & Deployment
    Implementation of Continuous Integration/Continuous Deployment pipelines to automate the build, testing, and deployment processes. Additionally, the initial release is deployed on cloud platforms.

Support

As part of the project implementation, we provide 2 months of ongoing extended support. This also includes 20 hours a month of development for minor bug fixes and an SLA covering any system outages or high-priority issues.

Testimonial

Awaited

Next Phase

We are now looking at the next phase of the project which involves:

1. Ongoing Support and adding new features every Quarter with minor bug fixes

2. Social & Community Building Features

Looking for similar services? Please get in touch.

Executive Summary

About Client 

The customer, Tonkin + Taylor, works in environmental consulting and meteorological services, focusing on providing high-resolution meteorological data for applications including air quality analysis, weather forecasting, and climate risk assessment. Their offerings are centered on advanced data modeling using the Weather Research and Forecasting (WRF) model, which requires significant computational resources due to its ability to generate detailed meteorological datasets.

Project Background - AWS Custom product for Weather research forecasting

Peritos was hired to address these challenges by developing a comprehensive system that could:

  • Efficiently run the WRF model on an HPC cluster.
  • Automatically create and manage HPC cluster jobs when new data requests arrive.
  • Automatically manage data resolution adjustments.
  • Provide a seamless experience for customers through an easy-to-use online platform.
  • Enable the commercialization of the datasets, ensuring that the customer could capitalize on the broad applicability of their data across multiple disciplines.

Scope & Requirement

Implementation

Technology and Architecture

The architecture of this application efficiently handles the computational intensity of the WRF model, scales dynamically with demand, and provides a seamless experience for users. The integration of various AWS services ensures that the solution is robust, secure, and scalable.

Overall Workflow

User Request: Users input data parameters and request pricing. If satisfied, they proceed with the purchase.

Processing Trigger: Upon payment confirmation, the system triggers the data processing workflow.

WRF and WPS Processing: The ParallelCluster performs the necessary computations to generate the meteorological data.

Post-Processing: Any additional processing is done before the final data is stored.

Download and Notification: Users are notified and provided with a link to download their processed data.
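The workflow above maps naturally onto a Step Functions state machine. The sketch below expresses it in the Amazon States Language; the state names and Lambda ARNs are placeholders, not the production resources.

```python
# Sketch: the purchase-to-download workflow as an Amazon States Language
# definition. State names and Lambda ARNs are illustrative placeholders.
import json


def wrf_workflow_definition(submit_arn: str, poll_arn: str,
                            notify_arn: str) -> str:
    """Submit the HPC job, poll until it completes, then notify the user."""
    return json.dumps({
        "StartAt": "SubmitJob",
        "States": {
            "SubmitJob": {"Type": "Task", "Resource": submit_arn,
                          "Next": "WaitForJob"},
            "WaitForJob": {"Type": "Wait", "Seconds": 60,
                           "Next": "CheckJob"},
            "CheckJob": {"Type": "Task", "Resource": poll_arn,
                         "Next": "JobDone?"},
            "JobDone?": {
                "Type": "Choice",
                "Choices": [{"Variable": "$.status",
                             "StringEquals": "COMPLETED",
                             "Next": "Notify"}],
                "Default": "WaitForJob",  # loop until the cluster finishes
            },
            "Notify": {"Type": "Task", "Resource": notify_arn, "End": True},
        },
    })
```

The wait-and-poll loop is what lets a short-lived Lambda orchestrate an HPC job that may run for hours.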

Technology

The web app was deployed with the following technological components:
  • Backend code: .NET, C#, Python
  • Web app code: Next.js
  • Database: PostgreSQL
  • Cloud: AWS

Integrations
  • Google APIs
  • Stripe
  • Auth0
  • SendGrid
  • Slurm APIs

Cost Optimization

Peritos enhanced Tonkin + Taylor’s FinOps capabilities by designing a cost-efficient, scalable AWS architecture. We optimized compute resources using AWS ParallelCluster, implemented serverless automation with Lambda and Step Functions, and used Amazon S3 and FSx for Lustre for cost-effective data storage. The solution allowed Tonkin + Taylor to scale on demand, reduce infrastructure costs, and gain visibility into cloud spending. This enabled efficient monetization of meteorological data while maintaining control over operational expenses.

High-Performance Computing (HPC) Environment

  • AWS ParallelCluster: Provides the compute infrastructure needed to run the WRF model and WPS processes. The cluster is set up dynamically and scaled according to the computational demands of the task, ensuring efficient resource usage.
  • Head Node and Compute Fleet: The head node manages the compute fleet, which executes the high-compute WRF and WPS processes.
  • FSx for Lustre: High-performance file storage integrated with the ParallelCluster, used to store and access the large datasets generated during processing.

Processing and Orchestration

AWS Lambda Functions: Used extensively for orchestrating various steps in the data processing workflow.

AWS Step Functions: Orchestrates the entire workflow by coordinating Lambda functions, managing state transitions, and handling retries or errors.

Features of Application

  • The solution leverages AWS cloud services to generate, process, and distribute high-resolution meteorological data.

  • Users interact via an interface hosted on AWS Amplify, secured by AWS WAF and Shield, with APIs managed by Amazon API Gateway.

  • The system orchestrates data processing using AWS Lambda functions and AWS Step Functions, coordinating tasks such as WRF and WPS processing on an AWS ParallelCluster.

  • FSx for Lustre provides high-performance storage, while Amazon S3 and Aurora DB handle data storage and transaction management.

  • Post-processing is done on EC2 instances, with notifications sent via SNS. The solution efficiently manages the high computational demands of the WRF model, scales dynamically, and ensures secure, seamless data access for internal and external users.

Challenges

  • Challenge 1: High Computational Demand: The WRF model’s capacity to produce highly detailed meteorological datasets requires extensive computational power, which made running it on the customer’s existing local infrastructure impractical. The challenge was to find a solution that could efficiently handle large-scale data generation at optimal cost.
    • Solution: This challenge was met by implementing an AWS-based high-performance computing (HPC) cluster, specifically AWS ParallelCluster, which provided the computational resources needed to run the WRF model efficiently. Jobs on the ParallelCluster were created and managed dynamically using AWS Step Functions and AWS Lambda, utilizing the Slurm APIs.
  • Challenge 2: User Experience and Commercialization: To monetize their meteorological data, the customer needed to create an accessible, user-friendly portal where external users could easily select regions, adjust data resolution, and purchase datasets. The portal needed to be intuitive, efficient, and fully capable of handling secure transactions, which was essential for the success of the customer’s business model.
    • Solution: The customer addressed this challenge by developing a web-based portal using AWS Amplify, integrated with AWS WAF and Shield for security, and managed via Amazon API Gateway. This platform provided a seamless user experience, enabling external customers to effortlessly interact with the system, select their data parameters, and complete purchases, thereby facilitating the commercialization of their datasets and enhancing revenue streams.
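To illustrate the Lambda-to-Slurm hand-off described in Challenge 1, the sketch below composes a Slurm batch script for a WRF run. The partition name, node count, and run directory are illustrative assumptions about the cluster layout.

```python
# Sketch: composing the sbatch script a Lambda might submit to the
# ParallelCluster head node for a WRF run. Partition, node count, and
# run directory are illustrative assumptions.
def build_sbatch_script(job_name: str, nodes: int, run_dir: str) -> str:
    """Render a Slurm batch script that runs WRF under MPI via srun."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --nodes={nodes}",
        "#SBATCH --partition=compute",
        f"cd {run_dir}",   # WRF reads its namelist from the working dir
        "srun ./wrf.exe",
    ])
```

Generating the script per request is what allows the node count (and therefore cost) to be sized to each customer’s data-resolution choice.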

Project Completion

Duration

  • Jan 2024 – Aug 2024: implementation and support

Deliverables

• Setting up the AWS services: architecture review and sign-off by the client’s internal teams and existing vendors, following the AWS Well-Architected Framework to ensure security, scalability, and performance are up to the mark.

• The custom web application was developed by the Peritos team working closely with the client’s product owner, completing changes, fixing bugs, and adding critical features prior to go-live to ensure a smooth release.

• We are still working on the handover documents and preparing for the final go-live.

Testimonial

Awaited

Next Phase

We are now looking at the next phase of the project which involves:

1. Ongoing Support and adding new features every Quarter with minor bug fixes

2. Adding support for more countries 

Looking for similar services? Please get in touch.

Executive Summary

About Client

AWS Control Tower Setup

Wine-Searcher is a web search engine that helps find the price and availability of any wine, whiskey, spirit, or beer worldwide. It has been in operation since 1999 and has offices in New Zealand and the UK. They provide easy-to-use search engines, price comparison tools, an extensive database of wines and spirits, an encyclopedia, and news pages that aim to cover all “wine-finding” needs.

https://www.wine-searcher.com/
Location: New Zealand & UK

Project Background

Peritos expertly directed an AWS Control Tower setup for Wine-Searcher, optimizing their cloud infrastructure. Leveraging AWS Control Tower, the Peritos team streamlined governance and compliance, ensuring seamless scaling and enhanced security. This was needed because the client had multiple separate accounts they wanted to consolidate under AWS Organizations via Control Tower. Through meticulous configuration, we tailored the environment to Wine-Searcher’s specific needs, facilitating efficient resource management and cost control. With AWS Control Tower’s automation and governance features, Wine-Searcher gained a robust foundation for future growth, while Peritos provided invaluable expertise, empowering the company to focus on innovation and deliver an exceptional user experience in the dynamic wine market.

Scope & Requirement for AWS Control Tower Setup

Prerequisite: Automated pre-launch checks for your management account 

Step 1. Create your shared account email addresses 

Expectations for landing zone configuration 

Step 2. Configure and launch your landing zone 

Step 3. Review and set up the landing zone 

Implementation

Technology and Architecture of AWS Control Tower Setup

Read on for the key components that defined the architecture of the AWS Control Tower setup for Wine-Searcher.

Technology/ Services used

We used AWS services and helped the client set up the following:

  • Cloud: AWS
  • Organization setup: Control Tower
  • AWS SSO for authentication using existing Azure AD credentials
  • Policies setup: created AWS service control policies
  • Templates created for common AWS services
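As an example of the service control policies mentioned above, the sketch below creates a common baseline guardrail (denying accounts the ability to leave the organization) via the Organizations API. The policy content and name are illustrative assumptions, not the client’s actual policies.

```python
# Sketch: creating a baseline service control policy with AWS Organizations.
# The deny-leave-organization guardrail is a common example, shown as an
# assumption rather than the client's actual policy set.
import json


def deny_leave_org_policy() -> str:
    """An SCP that prevents member accounts from leaving the organization."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }],
    })


def create_scp(name: str) -> None:
    """Register the SCP (requires boto3 and management-account credentials)."""
    import boto3
    boto3.client("organizations").create_policy(
        Name=name,
        Description="Baseline guardrail",
        Type="SERVICE_CONTROL_POLICY",
        Content=deny_leave_org_policy())
```

Once created, the policy still has to be attached to an organizational unit before it takes effect.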

Security & Compliance:

  • Tagging Policies
  • AWS config for compliance checks
  • NIST compliance 
  • Guardrails
  • Security Hub

Network Architecture 

  • Site to Site VPN Architecture using Transit Gateway
  • Distributed AWS Network Firewall
  • Monitoring with Cloud Watch and VPC flow logs. 

Backup and Recovery

  • Cloud systems and components followed AWS’s Well-Architected Framework, and the resources were all deployed with multi-zone availability and uptime of 99.99% or more. 

Cost Optimization 

  • Alerts and notifications were configured in AWS Cost Management.
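A cost alert of the kind described above can be expressed with AWS Budgets. In the sketch below, the budget amount, threshold, and subscriber address are illustrative assumptions.

```python
# Sketch: a monthly cost budget with an email alert via AWS Budgets.
# Amount, threshold, and subscriber address are illustrative assumptions.
def monthly_budget(limit_usd: str) -> dict:
    """A fixed monthly cost budget."""
    return {
        "BudgetName": "monthly-spend",
        "BudgetLimit": {"Amount": limit_usd, "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }


def alert_at_percent(threshold: float, email: str) -> dict:
    """Notify when actual spend crosses `threshold`% of the budget."""
    return {
        "Notification": {"NotificationType": "ACTUAL",
                         "ComparisonOperator": "GREATER_THAN",
                         "Threshold": threshold,
                         "ThresholdType": "PERCENTAGE"},
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
    }


def create_budget(account_id: str) -> None:
    """Register the budget (requires boto3 and AWS credentials at runtime)."""
    import boto3
    boto3.client("budgets").create_budget(
        AccountId=account_id,
        Budget=monthly_budget("1000"),
        NotificationsWithSubscribers=[alert_at_percent(80.0, "ops@example.com")])
```

Alerting on a percentage of the limit (rather than an absolute figure) lets the same notification rules survive budget changes.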

Code Management, Deployment

  • CloudFormation scripts for creating stack sets and scripts for generating AWS services were handed over to the client.

Challenges in Implementing AWS Control Tower Setup

  • Landing Zone Drift
  • Role Drift
  • Security Hub Control Drift
  • Trusted Access disabled

Project Completion

Duration of AWS Control Tower Setup Implementation

Aug 2023 – Sep 2023 (~4 weeks)

Deliverables for AWS Control Tower Setup

1. Control tower implemented
AWS Control Tower is a service built on a solid architecture that can assist your organization in meeting its compliance requirements by establishing controls and implementing best practices. Third-party auditors evaluate the security and compliance of several services available in your landing zone as part of various AWS compliance programs, including SOC, PCI, FedRAMP, HIPAA, and more.

2. Business Benefits
Ensuring compliance and implementing best practices is crucial for any organization. With our solution, you can set up a well-architected, multi-account environment in under 30 minutes. The creation of AWS accounts is automated with built-in governance, ensuring that set standards and regulatory requirements are met. You can also enforce preconfigured controls to adhere to best practices. Additionally, our solution enables the seamless integration of third-party software at scale to enhance your AWS environment.

Support

  • 1 month of extended support
  • A CloudFormation stack template to create more AWS resources using the available stacks
  • Screen-sharing sessions with demos of how the services and new workloads can be deployed.

Testimonial

Awaited

Next Phase

Looking for similar services? Please get in touch.

Executive Summary

About Client

Managing AWS Environment 

Wine-Searcher is a web search engine that helps find the price and availability of any wine, whiskey, spirit, or beer worldwide. It has been in operation since 1999 and has offices in New Zealand and the UK. They provide an easy-to-use search engine, price comparison tools, an extensive database of wines and spirits, an encyclopedia, and news pages that aim to provide all “wine-finding” needs.

https://www.wine-searcher.com/
Location: New Zealand & UK

Project Background

As part of their plan to launch a full suite of digital products, Wine-Searcher chose AWS as their cloud environment. Strategic resource allocation and cost optimization are critical to a cost-effective operation. As a reliable AWS partner, Peritos helped with AWS Cost Explorer and AWS Budgets, valuable tools for implementing ongoing discounted billing. Leveraging reserved instances and spot instances, and optimizing usage based on peak hours and demand patterns, can yield significant cost savings. Experts from the Peritos team regularly monitored and fine-tuned the AWS environment based on Wine-Searcher’s needs, allowing for continuous optimization while adhering to budgetary constraints and maintaining the required scalability and performance for their operations.

Scope & Requirement for Managing AWS Environment

In the 1st Phase of the AWS Environment Setup, implementation was discussed as follows:

  • Manage Billing  
  • Value added services  
  • Handling Complex environments   
  • Multiple AWS invoices   
  • Cost Optimization 
  • Cloud security optimization 

Implementation

Technology and Architecture of Managing AWS Environment

Read on for the key components that defined the architecture for managing the AWS environment for Wine-Searcher.

Technology/ Services used

We used AWS services and helped the client set up the following:

  • Cloud: AWS
  • Organization setup: Control Tower
  • AWS SSO for authentication using existing Azure AD credentials
  • Policies setup: created AWS service control policies
  • Templates created for common AWS services

Security & Compliance:

  • Tagging Policies
  • AWS config for compliance checks
  • NIST compliance 
  • Guardrails
  • Security Hub

Network Architecture 

  • Site to Site VPN Architecture using Transit Gateway
  • Distributed AWS Network Firewall
  • Monitoring with Cloud Watch and VPC flow logs. 

Backup and Recovery

  • Cloud systems and components followed AWS’s Well-Architected Framework, and the resources were all deployed with multi-zone availability and uptime of 99.99% or more. 

Cost Optimization 

  • Alerts and notifications were configured for AWS cost thresholds 

Code Management, Deployment

  • CloudFormation scripts for creating stack sets and provisioning AWS services were handed over to the client  
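The delivered templates themselves are not reproduced in this case study. As a rough sketch, a template for one commonly used service (here, an S3 bucket with default KMS encryption) can be generated like this; the logical ID and properties are illustrative assumptions:

```python
import json

def s3_bucket_template(logical_id: str = "DataBucket") -> dict:
    """Sketch of a minimal CloudFormation template for a common AWS
    service: an S3 bucket with default KMS encryption. Illustrative
    only -- not one of the templates delivered to the client."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Example stack-set template for a common AWS service",
        "Resources": {
            logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                        ]
                    }
                },
            }
        },
    }

# Serialized body of the kind passed to CloudFormation stack-set APIs.
template_body = json.dumps(s3_bucket_template(), indent=2)
```

A template like this, deployed through stack sets, lets every member account provision the same service with consistent settings.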

Challenges in Implementing Managing AWS Environment

  • Collating all accounts together 
  • Understanding and agreeing on how the accounts would be managed under the distribution model  

Project Completion

Duration of Managing AWS Environment Implementation

1st Sep 2021 to present

Deliverables for Managing AWS Environment

  • Collate all accounts under the distribution ECAM model  
  • Manage billing 
  • Provide support services as needed 
  • Ongoing discounted licensing  

Support

  • One month of extended support 
  • A template for a CloudFormation stack to create more AWS resources using the available stacks
  • Screen-sharing sessions with demos of how the services and new workloads can be deployed. 

Testimonial

Awaited

Next Phase

We are now looking at the next phase of the project, which involves:

1. Implementing a control tower for the client.  

If You Are Looking For Similar Services, Please Get In Touch

Executive Summary

About Client

AWS Compute & High-performance Computing

Tonkin + Taylor is New Zealand’s leading environment and engineering consultancy, with offices located globally. They shape the interfaces between people and the environment, which includes earth, water, and air. They have won awards such as the Beaton Client Choice Award for Best Provider to Government and Community (2022) and the IPWEA Award for Excellence in Water Projects for the Papakura Water Treatment Plant (2021).

https://www.tonkintaylor.co.nz/
Location: New Zealand

Project Background

Tonkin + Taylor were embarking on launching a full suite of digital products and settled on AWS as their choice of cloud environment. They wanted to accelerate their digital transformation and add greater business value through AWS development environment best practices. To achieve this, we needed to configure AWS Compute & High-Performance Computing, following best practices and meeting compliance standards, to serve as a foundation for implementing more applications. The AWS Lake House is a central data hub that consolidates data from various sources and caters to all applications and users. It can quickly identify and integrate any data source. The data goes through a meticulous three-stage refining process: Landing, Raw, and Transformed. After refinement, it is added to the data catalog and is readily available for consumption through a relational database.

Scope & Requirement for AWS Compute & High Performance Computing

In the 1st Phase of the AWS Environment Setup, implementation was discussed as follows:

  • Implement Data Lakehouse on AWS

Implementation

Technology and Architecture of AWS Compute & High Performance Computing

Read more on the key components that defined the implementation of the Data Lakehouse on AWS for Tonkin + Taylor.

Technology/ Services used

We used AWS services and helped set up the following:

  • Cloud: AWS
  • Organization setup: Control tower 
  • AWS SSO for authentication using existing Azure AD credentials
  • Policies setup: Created AWS service control policies
  • Templates created for using common AWS services 

Security & Compliance:

  • Tagging Policies
  • AWS Config for compliance checks
  • NIST compliance 
  • Guardrails
  • Security Hub

Network Architecture 

  • Site to Site VPN Architecture using Transit Gateway
  • Distributed AWS Network Firewall
  • Monitoring with CloudWatch and VPC flow logs. 

Backup and Recovery

  • The cloud systems and components used followed AWS’s Well-Architected Framework, and all resources were deployed across multiple Availability Zones with uptime of 99.99% or more. 

Cost Optimization 

  • Alerts and notifications were configured for AWS cost thresholds 

Code Management, Deployment

  • CloudFormation scripts for creating stack sets and provisioning AWS services were handed over to the client  

AWS Compute & High Performance Computing Challenges & Solutions

  • Diverse data sources: data analytics, clean-up, and integration patterns to pull data from different data sources 

  • On-premise data connection to data lake migration: a secure site-to-site AWS connection was implemented  

  • Templatized format for creating pipelines: created scripts of a specific format, deployment scripts, and CI/CD scripts  

Project Completion

Duration of AWS Compute & High Performance Computing

Apr 2023 to July 2023 ~ 4 months

Deliverables for AWS Compute & High Performance Computing

  • Create scripts to create and deploy pipelines 
  • Implement Data Lakehouse  

Support

  • Providing ongoing support as we are a dedicated development partner for the client  

Testimonial

After we set up the environment and enabled the client to start using it, they were eager to roll out apps using cloud resources. It was exciting to see the client using the environment extensively. We also took feedback from stakeholders, as below:

Santosh Dixit
Digitization delivery lead

Next Phase

We are now looking at the next phase of the project, which involves:

  1. API and file-based data sources to be added  
  2. Process data to be used in different applications for ingesting in other applications  

If You Are Looking For Similar Services, Please Get In Touch

Executive Summary

About Client

ABDM-Compliant Hospital Management Software for hospitals of all sizes.

 
Ekanshi Solutions Pvt Ltd offers expert management consultation services to healthcare organizations. They provide strategic guidance and support to help organizations achieve their goals. With their in-depth expertise and industry knowledge, they help organizations optimize their operations, make informed decisions, and achieve excellence in patient care.

 

https://ekanshisolutions.com/
Location: Lucknow, Uttar Pradesh, India

Project Background

Ekanshi Solutions needed to review its clients’ hospitals and clinics to ensure they met compliance requirements. To achieve this, we recommended developing a software solution that meets the basic compliance requirements and also eases the operational burden on hospitals.

  • Registration and demographic data collection.
  • Patient history and medical record management.
  • Appointment scheduling and reminders.
  • Patient check-in and waiting list management.
  • ABDM compliance for M1, M2, and M3 scenarios: creating and verifying ABHA IDs and managing patient records
  • The movement of this on-premise app to a cloud-based infrastructure is aimed at improving performance, ensuring data security, and enabling seamless integrations with other digital health services.
  • Automated AWS HIPAA compliance checks, aligned with best practices. 

Scope & Requirement for ABDM-Compliant Hospital Management Software

In the 1st Phase of custom application development, we discussed the implementation as follows:

  • A customized app that helps generate ABHA IDs and integrates ABDM-compliant APIs 
  • The client hospital team should be able to view patient records easily and send records to and receive them from the central server
  • Able to book appointments and schedule reminders easily. 
  • We would create a web version of the app to manage the above functionality, replacing the current paper-based and unorganized work the admin was doing. 
  • Plan and execute the migration of application code, data, and databases from the on-premise system to the selected cloud platform.
  • Ensure minimal downtime by utilizing cloud migration tools and strategies, such as database replication, to synchronize on-premise data with the cloud.
  • Compliance with HIPAA, using Config rules for ongoing compliance monitoring 

Implementation

Technology and Architecture of Hospital Management Software

Read more on the technology and architecture we used for AWS custom application development.

Technology/ Services used

The web app was deployed with the below technological components:

  • Backend Code: .NET Core, C#
  • Web App code: AngularJS
  • Database: PostgreSQL
  • Cloud: AWS

Integrations:

  • Google APIs 
  • ABDM Integration
  • Auth0
  • SendGrid

Security:
  • AWS WAF service is used for the firewall
  • All API endpoints are token-based

Scalability

  • The application is designed to run on serverless services so that it can scale up and down automatically based on usage. 

Backup and Recovery

  • Automated backups are configured to back up the database and store multiple copies of the backup. 

Cost Optimization 

  • Peritos optimized costs for the ABDM-compliant hospital management software by architecting a scalable, cloud-based solution using serverless components and right-sized infrastructure.
  • Leveraging AWS tools like Cost Explorer and Trusted Advisor, we continuously monitored usage to eliminate waste and reduce expenses.
  • Automation through CI/CD pipelines, along with performance-tuned code and databases, ensured reliable delivery while minimizing operational overhead—resulting in a cost-effective, high-performance system for healthcare providers.

Code Management, Deployment

  • CI/CD is implemented to automatically build and deploy any code changes

Features of the Application

  • Integrated Patient Profile with NDHM: This application seamlessly integrates with NDHM, enabling the swift creation of ABHA IDs and facilitating the exchange of patient health data. By interfacing with the National Digital Health Mission, the system ensures that patient data is standardized, up-to-date, and easily accessible, fostering more informed medical decisions.
  • Multi-tenancy Architecture: The system’s ability to cater to multiple hospitals or health providers under a single unified platform is a significant advantage. Each hospital can manage its operations while benefiting from centralized updates and features, ensuring scalability and simplifying administrative tasks.
  • Data Encryption at Rest and In Transit: Implemented encryption using AWS Key Management Service (KMS) for both data at rest (S3, EBS, RDS) and in transit (SSL/TLS) to ensure compliance with GDPR and HIPAA requirements for securing sensitive data.
  • Identity and Access Management (IAM): Designed and enforced strict least-privilege access policies using AWS IAM. This included creating custom roles and policies with granular permissions for specific users and services, ensuring only authorized personnel had access to sensitive data.
  • AWS Config and Compliance Rules: Set up AWS Config to track and audit configuration changes across the environment. Applied AWS Config Rules to continuously monitor compliance against GDPR and HIPAA requirements, such as encryption enabled on S3 buckets and logging for API Gateway and Lambda.
  • Audit Logging and Monitoring: Configured AWS CloudTrail and Amazon CloudWatch for continuous logging and monitoring of API calls, changes, and actions within the AWS environment. This was crucial for meeting HIPAA requirements for audit trails and GDPR’s data access visibility.
  • VPC Flow Logs and Security Groups: Deployed Virtual Private Cloud (VPC) with properly configured flow logs to monitor and log network traffic. Used AWS Security Groups and Network ACLs to ensure secure network segmentation and prevent unauthorized access to sensitive resources.
  • Data Residency and Data Transfer Controls: Implemented controls to ensure data residency compliance by restricting data storage and processing to specific AWS regions as required by GDPR. Utilized VPC endpoints and AWS Direct Connect to secure data transfers and reduce the exposure to the public internet.
  • Backup and Disaster Recovery: Designed an automated backup strategy using AWS Backup to meet GDPR’s requirement for data recoverability, ensuring regular snapshots of critical databases (e.g., RDS, DynamoDB) and storing them in encrypted S3 buckets across different regions for redundancy.
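The AWS Config rule described above (flagging S3 buckets without encryption) reduces to a small compliance evaluation. The sketch below shows that logic as plain Python for illustration; in production it runs as AWS's managed rule or a Lambda-backed custom Config rule, not as this standalone function:

```python
def evaluate_s3_encryption(configuration: dict) -> str:
    """Sketch of the evaluation behind a Config rule that checks
    default encryption on S3 buckets. Returns the compliance verdict
    string used by AWS Config."""
    sse = configuration.get("ServerSideEncryptionConfiguration") or {}
    return "COMPLIANT" if sse.get("rules") else "NON_COMPLIANT"

# An encrypted bucket passes; an unencrypted one is flagged.
encrypted = {
    "ServerSideEncryptionConfiguration": {
        "rules": [{"applyServerSideEncryptionByDefault": {"sseAlgorithm": "aws:kms"}}]
    }
}
unencrypted = {}
results = (evaluate_s3_encryption(encrypted), evaluate_s3_encryption(unencrypted))
```

Continuous evaluation of every bucket against a check like this is what turns the GDPR/HIPAA encryption requirement into an auditable, ongoing control.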

Challenges in implementing ABDM Compliant Hospital Management Software

  • Integration with ABDM APIs was needed to achieve compliance; however, the API documentation was not up to date, and the API versions kept changing. As the APIs moved from v1 to v3 during app development, we had to revisit the integrations and refactor code to ensure the latest set of APIs was used.
    • Help from the PwC team was provided: they explained the API endpoints and the test scenarios to cover so that the app's compliance checks could be passed. 
  • Testing of the application with multiple end users who were experts in their domain was a challenge.
    • We found the data quite complicated to understand and relied on the client’s team to test and inform us about the expected result in case of any issues. Additionally, we identified key users such as doctors, administrators, nurses, department heads, etc., to ensure coverage of all user scenarios.
  • Given the sensitive nature of medical data, ensuring robust security measures against breaches and unauthorized access is paramount.
    • The hospital management application ensured data security and privacy through end-to-end encryption for both data at rest and in transit. AWS’s suite of security tools, including IAM for access control, KMS for key management, and VPCs for network isolation, were leveraged. We fortified the APIs with security tokens and rate limiting and conducted regular training sessions for staff on security best practices.
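The rate limiting applied to the APIs above is commonly implemented as a token bucket. The sketch below is a minimal in-process illustration of that idea; the production system relied on AWS-side throttling rather than this exact code:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: allows `capacity` requests as a
    burst, refilling continuously over `refill_period` seconds."""

    def __init__(self, capacity: int, refill_period: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.rate = capacity / refill_period  # tokens added per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top up tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 3 requests per minute allowed; the 4th immediate call exceeds the burst.
bucket = TokenBucket(capacity=3, refill_period=60)
results = [bucket.allow() for _ in range(4)]
```

Per-token or per-client buckets like this are what keep a leaked credential or a misbehaving integration from hammering sensitive endpoints.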

Project Completion

Duration of Hospital Management Software Implementation

Jan 2023 – Dec 2023, ~1 year (1st version) 

2nd Version: Jan 2024 – present. Currently working on reporting, enhancements, and billing, plus in-patient and out-patient feature additions along with M2 billing.

Deliverables for ABDM-Compliant Hospital Management Software

Setting up the AWS environment for the client system

• Custom web application for two environments: production and UAT 

• We delivered the features as agreed in the scope 

  1. Registration and demographic data collection.
  2. Patient history and medical record management.
  3. Appointment scheduling and reminders.
  4. Patient check-in and waiting list management.
  5. ABDM compliance for M1, M2, and M3 scenarios: creating and verifying ABHA IDs and managing patient records
  6. HIPAA compliance report for managing workloads and following HIPAA best practices, plus an ongoing monitoring report. 
  7. We developed the following set of core features: 
    User: Manages user registration, authentication, roles, and permissions.
    Hospital: Multi-tenant application to handle hospital registration, department management, and related configurations.
    Doctor: Manages doctor profiles, availability, specialties, and associated scheduling.
    Patient: Interfaces with ABDM for patient data operations, ABHA ID creation, and retrieval of patient health history.
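The core components above can be pictured as a minimal domain model. This is an illustrative sketch with assumed field names, not the delivered database schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Minimal domain-model sketch of the core components. Field names are
# illustrative assumptions, not the delivered schema.

@dataclass
class Hospital:
    """Multi-tenant: each hospital is a tenant with its own departments."""
    name: str
    departments: List[str] = field(default_factory=list)

@dataclass
class Doctor:
    """Doctor profile tied to a hospital tenant."""
    name: str
    specialty: str
    hospital: Hospital

@dataclass
class Patient:
    """Patient record; abha_id is populated after ABDM/ABHA verification."""
    name: str
    abha_id: Optional[str] = None

h = Hospital("City Care", departments=["Cardiology"])
d = Doctor("Dr. Rao", "Cardiology", h)
p = Patient("A. Kumar", abha_id="12-3456-7890-0001")
```

Keeping the tenant (Hospital) as an explicit entity is what lets one deployment serve many hospitals while isolating their data.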

Support

  • As part of the project implementation, we provide 2 months of ongoing extended support.
  • This also includes 20 hrs a month of development for minor bug fixes and an SLA to cover any system outages or high-priority issues.

Testimonial

After working for 6 months on the project, we took feedback from the product owner, with whom we worked closely on project execution:


Peritos and using AWS have been instrumental in transforming our hospital’s operations for clients. It empowered us to create a custom multi-tenant application that not only meets our current needs but also positions us for future growth and innovation to showcase this to our larger client base and prospects. With a solid system now, we have the confidence to continue our mission of providing exceptional healthcare services to our community, knowing that our technology backbone is secure, reliable, and ready to scale. We are happy with the services and look forward to completing more projects in the future with the Peritos team.

Akanksha Niranjan
OWNER, EKANSHI SOLUTIONS

Next Phase

We are now looking at the next phase of the project which involves:

1. Ongoing support and adding new features every quarter, with minor bug fixes

2. Electronic Medical Records (EMR) Integration: Incorporate a system that not only stores patient data but also tracks their entire medical history, including medications, allergies, etc.

3. AI-Powered Predictive Analysis: Use AI and machine learning to analyze patient data for potential health risks, helping doctors make informed decisions

If You Are Looking For Similar Services, Please Get In Touch

Executive Summary

About Client 

AmityWA empowers participants and provides them with support and assistance that build their skills and independence. It is often difficult to manage all the data and keep track of each participant, so AmityWA needed a server providing programmatic control of the SAP Support Services-Azure Remote Server Setup and its associated hosted virtual machines.

 

https://amitywa.com.au/

Location: Perth, Australia 

Project Background

Peritos assisted AmityWA in streamlining all their servers and showed how to address their data challenges by leveraging the SAP Support Services-Azure Remote Server Setup. Peritos offered discounted licensing and guidance on saving ongoing costs. 

Scope & Requirement

Below are some of the scope items 

  • We implemented Azure Windows Server support, along with licensing support at a discounted price. 
  • Advise on best practices for Azure license management
  • Recommend a savings plan
  • 1-month free trial, extended to a 2-month trial as a partner
  • Understand the best licensing needed for the company 
  • Handover and questions

Implementation

Technology and Architecture

Technology 

  • SAP Support Services-Azure Remote Server Setup
  • Team Member License

Integrations

  • NA 

Security:

  • Comes with Microsoft standard support and security cover

Backup and Recovery

  • Microsoft production support 
  • 1 live environment and 2 sandbox environments, with multiple companies that can be created

 

Scalability

As applicable with the standard Service

Cost Optimization 

Explained to the client how costs could be saved with monthly and annual commitment orders.

Mixed licenses for Team Members and discounted licensing aided in cost savings.

Code Management, Deployment

Deployment of the SAP Support Services-Azure Remote Server Setup was done by Peritos. 

 

Challenges of SAP Support Services-Azure Remote Server Setup

AmityWA aims to be a provider of services that empower NDIS participants toward their goals. While catering to all participants’ needs, it is important that each participant's data is secure and accessible to AmityWA. For that, they required a virtual server enabling faster and easier backup and recovery of key application workloads. Operating through a virtual server provided faster provisioning of applications and resources, improved disaster recovery and business continuity, and minimized or eliminated downtime. 

Project Completion

Duration

Oct 2021, ~2 working days 

Deliverables

  • Delivered a web app which smoothly automates and manages the leads for each of AttentionSeeker's clients, ensuring the data is stored in the app and clients can see the progress made by the Attention Seeker team, saving time on frequent progress-update calls. 
  • A customized design of the app was discussed with the client
  • Training and handover in using the app, explaining how users and other clients can be easily onboarded. 

Support for Setting Up Azure Windows Virtual Server

  • For all licenses we implement, we provide month-end billing with 20-day credit terms. 
  • We provide value-added services by sending the client reports on license usage and each user's last activity date, helping them manage license costs and gain visibility. 

Testimonial

Awaited

Next Phase

We are also working with the client on other projects:

1. Support for Dynamics system

2. Helping with integration of Business Central with e-commerce online stores (Amazon, WooCommerce, etc.) 

If You Are Looking For Similar Services, Please Get In Touch

Executive Summary

About Client

Electric Kiwi is an independent online New Zealand electricity retailer. Established in 2014, Electric Kiwi uses cloud computing and smart meters to serve customers nationwide in major urban areas. Its services depend on the presence of a smart meter in the customer’s home. For this client, we did a POC for integrating AWS Support Services with Datadog’s CloudFormation template.

http://www.electrickiwi.co.nz/

Location: New Zealand

Project Background

This case study provides a synopsis of how we did the POC of integrating an AWS account with Datadog using Datadog’s CloudFormation template, as part of AWS Support Services. Creating an IAM role and associated policy enables Datadog’s AWS account to make API calls to collect data from, or push data into, your AWS account. The CloudFormation template supplies all the tools required to send this data to your Datadog account, and Datadog maintains the template to provide the latest functionality. The template even deploys the Datadog Forwarder Lambda function for sending logs to Datadog.

After establishing the initial connection, enable the individual AWS service integrations appropriate to your AWS estate. With a single click, Datadog provisions the required resources in your AWS account and starts analyzing the metrics and events for the services you use. For the common AWS services you are using, Datadog provides out-of-the-box dashboards with prompt and customizable visibility. The project background demonstrates setting up the integration and installing the Datadog Agent on an Amazon Linux EC2 instance, and gives a broad outline of the integration’s capabilities. Visit the "Enable integrations for individual AWS services" sections for a list of the available sub-integrations.

This procedure can be repeated for multiple AWS accounts as required, or you can also use the API, AWS CLI, or Terraform to set up various accounts simultaneously. For more details, read the Datadog-Amazon CloudFormation guide.
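In practice, launching Datadog's template comes down to supplying a handful of stack parameters, notably the Datadog API key and the external ID used in the IAM role's trust policy. The sketch below assembles parameters in the shape CloudFormation's stack APIs expect; the parameter names here are assumptions for illustration, and the authoritative names come from Datadog's own template:

```python
def datadog_stack_parameters(api_key: str, external_id: str) -> list:
    """Sketch: CloudFormation stack parameters for the Datadog AWS
    integration. Parameter names are illustrative assumptions; check
    Datadog's published template for the real ones."""
    values = {
        "DdApiKey": api_key,                 # assumed parameter name
        "ExternalId": external_id,           # used in the IAM role trust policy
        "InstallDatadogForwarder": "true",   # deploy the log Forwarder Lambda
    }
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in values.items()]

# Placeholder values only; real keys and IDs come from Datadog's UI.
params = datadog_stack_parameters("dd-api-key-redacted", "ext-id-123")
```

Because the parameters are the only per-account input, the same template can be launched repeatedly, which is what makes the multi-account rollout mentioned above straightforward.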

Scope & Requirement

The scope & requirements involved the following:

  • Datadog integration with AWS Support Services to measure and establish observability metrics 
  • Identified costing  
  • Updated pricing with partner discount as applicable  
  • Shortlisted services for which we did POC  
  • EC2, database Monitor Requirement, Steps, and Dashboard View  
  • Pricing and cost for using Datadog for monitoring and analytics

Implementation

Technology and Architecture

Technology/Services used 

We used Datadog integration with AWS services to measure and establish observability metrics, and helped set up the following:

  • Cloud: AWS
  • Shortlisted services for which we did POC
  • Organization setup: Control tower 
  • Policies setup: Created AWS service control policies
  • Templates created for using common AWS services 

Security & Compliance:

  • Tagging Policies
  • AWS Config for compliance checks
  • NIST compliance 
  • Guardrails
  • Security Hub

Backup and Recovery

AWS Backup provided a simple, secure, cost-effective, cloud-based backup solution, which was already implemented for the client.

Network Architecture 

  • Site to Site VPN Architecture using Transit Gateway
  • Distributed AWS Network Firewall
  • Monitoring with CloudWatch and VPC flow logs. 

Cost Optimization 

  • Alerts and notifications were configured for AWS cost thresholds 
  • Identified Costing
  • Updated pricing with partner discount as applicable

Code Management, Deployment

  • CloudFormation scripts for creating stack sets and provisioning AWS services were handed over to the client  

Challenges

  • Access to the actual environment was not provided for setup 
  • In the complex environment, evaluating monitoring costs for production was only possible with access to the client's actual environment

Project Completion

Duration

15th May to 15th June 2022 ~ 1 month

Deliverables

  • AWS setup and architecture design and document   

Support

  • 1 month of extended support
  • A template for Cloud formation stack to create more AWS resources using the available stacks
  • Screen-sharing sessions with a demo of how the services and new workloads can be deployed.

Testimonial


We sought the services of Peritos Solutions to help with our Observability objectives. Peritos took the team through a phased process of uncovering exactly what we needed from observability. Through this discovery, we can understand that the solution that we thought we should be implementing didn’t fully match our requirements.  Without the work from Peritos we would have implemented something not fit for purpose and we are now in a great place to re-evaluate our objectives and requirements and make a more informed decision. The work Peritos performed was top notch and we look forward to more engagements with them.

Matt Kardos
Enterprise Architect

Next Phase

No new work identified  

If You Are Looking For Similar Services, Please Get In Touch