Friday, November 4, 2022

Installing the New Relic PHP agent on the AWS EC2 ARM64 (Graviton) family

 

Installing the New Relic PHP agent is usually very simple, but once we upgraded our instance to the AWS EC2 arm64 (Graviton) family, we were unable to install the PHP agent using the usual steps. When we dug around, we found the following link, which solved our issue.

https://docs.newrelic.com/docs/apm/agents/php-agent/installation/php-agent-installation-arm64/

Below are the steps we followed to install the agent on an Amazon Linux 2 instance; I hope they are helpful to others.

1.       SSH to an EC2 instance

2.       In the root folder, create a new folder called “newrelic”

mkdir newrelic

3.       Get into the newly created folder

cd newrelic

4.       As per the above-mentioned link, the New Relic PHP agent source code is available at the following GitHub path:

 https://github.com/newrelic/newrelic-php-agent

Clone the New Relic PHP agent code so it can be compiled for arm64. Run the following command:

git clone https://github.com/newrelic/newrelic-php-agent.git

5.       Once the code is cloned, a new folder called “newrelic-php-agent” will be created in the current folder. Get into that folder to compile the source code

cd newrelic-php-agent/

 

6.       Follow these steps to compile and install the New Relic PHP agent

a.       Run sudo make all in the newrelic-php-agent folder to build the agent. It will take a few minutes

b.       Once it is compiled, run sudo make agent-install

c.       Run the following command to create the log folder

sudo mkdir /var/log/newrelic

d.       Run the following command to give full permission to the log folder

sudo chmod 777 /var/log/newrelic

e.       Run the following command to copy the daemon file to the appropriate folder

sudo cp bin/daemon /usr/bin/newrelic-daemon 

 

7.        Copy the newrelic.ini file to the PHP ini path. On Amazon Linux, additional PHP ini files are stored in /etc/php.d/

a.       To find the ini file path locations, run the following commands:

To find the additional ini files path:  php --ini | grep 'additional'

To find the main ini file path:  php --ini | grep 'Configuration File'

b.       To copy the newrelic.ini template to the appropriate path, run the following command:

sudo cp agent/scripts/newrelic.ini.template /etc/php.d/30-newrelic.ini

8.       Once newrelic.ini is copied, edit the file and change the following parameters

a.       To edit the file, run the following command

sudo vi /etc/php.d/30-newrelic.ini

b.       Update the following parameters and save

Add a valid license key to the line newrelic.license = "INSERT_YOUR_LICENSE_KEY".          

Change the application name that's shown in one.newrelic.com on the line newrelic.appname = "PHP Application"
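If you prefer to script these edits instead of using vi, the following sed commands are a minimal sketch. They assume the template ships both settings uncommented, as the edits in step 8 imply (adjust the patterns otherwise); YOUR_LICENSE_KEY and the app name are placeholders to substitute:

# Replace the license key and application name in place (placeholder values)
sudo sed -i 's/^newrelic.license =.*/newrelic.license = "YOUR_LICENSE_KEY"/' /etc/php.d/30-newrelic.ini
sudo sed -i 's/^newrelic.appname =.*/newrelic.appname = "My PHP Application"/' /etc/php.d/30-newrelic.ini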

 

9.       Once everything is done, run the following commands to restart the httpd and php-fpm services and start the New Relic daemon

sudo systemctl restart httpd.service

sudo systemctl restart php-fpm

/usr/bin/newrelic-daemon start
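To verify the installation, check that the PHP extension is loaded and the daemon process is running (a quick sanity check; exact output varies by PHP version):

php -m | grep newrelic

ps aux | grep newrelic-daemon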

 

The above steps should help you install the New Relic PHP agent. If you encounter any issues, the following link is helpful for troubleshooting:

https://discuss.newrelic.com/t/php-troubleshooting-framework-install/108683

Tuesday, September 13, 2022

Disaster Recovery Plan

 

This is one of the least-focused areas in many businesses, and when disaster strikes, businesses often struggle to recover. Every organisation should have a BCP team, headed by senior management, to make sure various disasters can be managed. Broadly, there are two types of disaster.

·         Natural disasters: examples are earthquakes, floods, storms, etc.

·         Human-made disasters: examples are vulnerability attacks, power outages, fires, etc.

Any disaster will impact the availability of the system. Sometimes human-made disasters, like vulnerability attacks on parts of the system, will also impact the integrity of the system. Availability and integrity are part of the CIA triad (confidentiality, integrity, and availability).

A primary goal of DR solutions is to increase high availability and decrease single points of failure (SPOF), thus building strong system resilience and fault tolerance.

·         A single point of failure (SPOF) is any component that can cause an entire system to fail

·         System resilience refers to the ability of a system to maintain an acceptable level of service during an adverse event

·         Fault tolerance is the ability of a system to suffer a fault but continue to operate

·         High availability is the use of redundant technology components to allow a system to quickly recover from a failure after experiencing a brief disruption

As part of the business impact analysis, the BCP team will arrive at RPO, RTO, MTD and WRT values based on business risk and criticality. The DR solution will be designed based on these values. For example, if the BCP team commits to an MTD (Maximum Tolerable Downtime) of 4 hours for customer service, the DR system needs to be built to achieve that. The BCP team might not be aware of the technology needed to achieve it; they arrive at the value based on business impact. The MTD figure communicates how quickly the production environment must be brought back to normal after a disruptive event.

 


 

RPO (Recovery Point Objective): This represents the amount of data loss that can be tolerated at the time of recovery. The database is backed up on a schedule; after a disaster, the gap between the last backup time and the disaster time is the data that is lost, and the RPO caps that gap. The RPO value indicates how old the most recent backup available for recovery may be. A lower RPO means a higher frequency of data backup. For example, if our RPO is 5 minutes, our backup must be refreshed every 5 minutes to meet the RPO requirement.

RTO (Recovery Time Objective): This represents the maximum amount of time it should take to restore the environment after a disaster. During this time, the environment is completely down and not available for production. A lower RTO calls for automated, redundant systems, while a higher RTO allows more manual intervention to get up and running.

WRT (Work Recovery Time): This represents the maximum time it should take, once the system is up and running, to confirm the integrity of the system. RTO deals with getting the environment up and running; WRT makes sure the application is verified and ready to be handed to business users to start working.

MTD (Maximum Tolerable Downtime): This represents the maximum time the business can tolerate the system being unavailable. The total downtime consists of RTO + WRT. For example, if the business sets an MTD of 4 hours and WRT is 1 hour, then RTO must be at most 4 − 1 = 3 hours. The MTD varies with the criticality of the business function.

The BCP team will identify MTD and RPO based on business requirements, and the DR team will build the system to achieve them. Lower RTO and RPO targets cost more to achieve; higher targets cost less. As part of building a recovery strategy, we mostly experience the following three types of disruption:

·         Nondisaster: a significant impact on the service with a limited impact on the facility. The solution may include system or software restoration.

·         Disaster: a significant impact on both the service and the facility. To address this, we need an alternative facility; restore software and hardware there to overcome the disruption, and keep the alternative environment available until the problem in the main facility is resolved.

·         Catastrophe: a significant impact on the facility, necessitating both short-term and long-term solutions to rebuild it.

Building a recovery site is a major part of a DR solution; the recovery site is what lets us recover from a disaster. The following are the popular site types for handling a disaster:

·         Cold Site: only the facility is available to restore into. It can take a week or more to get up and running

·         Warm Site: minimal configuration is available. If there is a disaster, it will take a few days to get up and running

·         Hot Site: a fully configured redundant site is available to take over within a few hours.

 



Thursday, September 1, 2022

Business continuity plan

 

Often, we think business continuity plans (BCP) and disaster recovery plans (DRP) are the same. In reality, they are not: DR is a subset of BCP and focuses on how to recover once a disaster has struck, while BCP operates at the strategy level and lays out plans for keeping the business running if any disaster occurs.

Business continuity management (BCM) is a holistic management process to handle both BCP and DRP. BCM provides a framework for integrating resilience with the capability for effective responses in a manner that protects the interests of the organization’s key stakeholders. The main objective of BCM is to allow the organisation to continue to perform business operations under various conditions. BCM is the main approach to managing all aspects of BCP and DRP.

 



The following are a few widely used industry standards and frameworks available for BCP.

·         ISO/IEC 27031:2011: describes the concepts and principles of information and communication technology (ICT) readiness for business continuity

·         ISO 22301:2019 Security and resilience — Business continuity management systems — Requirements

·         NIST outlines the following steps in SP 800-34

 

BCP helps organizations to:

·         Respond appropriately to emergency situations

·         Ensure personnel safety

·         Reduce business impact

·         Resume critical business functions

·         Plan to work with vendors for DR

·         Reduce confusion during a crisis

·         Increase customer confidence

·         Get back up and running quickly after a disaster

 

NIST SP 800-34 outlines the following steps.



 

Business continuity planning (BCP) entails assessing organisational risks and developing policies, plans, and procedures to mitigate their impact if they occur. BCP focuses on how to keep the organisation in business after a major disruption takes place. It is about the survivability of the organisation and making sure that critical functions can still take place even after a disaster. The goal of BCP planners is to implement a combination of policies, procedures, and processes such that a potentially disruptive event has as little impact on the business as possible. The BCP process has four major steps:

·         Project scope and planning

·         Business impact analysis

·         Continuity planning

·         Approval and implementation

 




Wednesday, August 24, 2022

RDS to Aurora data source migration in AWS QuickSight

 

Recently, we came across a problem when we migrated our existing RDS instance to Aurora. We had already developed a lot of QuickSight reports using the RDS data source, and once we migrated to Aurora, there was no direct way to change the data source connection from RDS to Aurora.

Editing a data source through the QuickSight console only allows changing the instance ID, username, and password.

The following are the steps to edit the data source connection using the console.

  1. Select Datasets on the left side of the QuickSight console, and then click the New Dataset button in the top right corner.
  2. Scroll down to the FROM EXISTING DATA SOURCES section and select a data source.
  3. In the popup, click Edit Data Source.
  4. Change the required details like instance ID, username, and password.
  5. Click "Validate connection."
  6. If the connection validates, click Update data source.

 

The above steps will help you update your RDS connection. If you want to change the connection from RDS to Aurora, or to any other connection type, you can use the following AWS CLI commands.

Step 1: First we need the data source ID to edit via the AWS CLI. To find it, run the following command, which lists all the available data sources along with their data source IDs:

aws quicksight list-data-sources --aws-account-id <<account id>> --region <<Region>>
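If you only need the IDs and names, the CLI's built-in --query filter keeps the output compact. A sketch, with a placeholder account ID and region:

aws quicksight list-data-sources --aws-account-id 123456789012 --region us-east-1 --query 'DataSources[].[DataSourceId,Name]' --output table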

Step 2: Generate an update skeleton using the following AWS CLI command.

aws quicksight update-data-source --generate-cli-skeleton input > edit-data-source.json 

The generated JSON contains the following content, which includes a parameters section for every supported data source type; keep only the sections you need (see the example at the end of this post):

"{
    "AwsAccountId": "",
    "DataSourceId": "",
    "Name": "",
    "DataSourceParameters": {
        "AmazonElasticsearchParameters": {
            "Domain": ""
        },
        "AthenaParameters": {
            "WorkGroup": ""
        },
        "AuroraParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "AuroraPostgreSqlParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "AwsIotAnalyticsParameters": {
            "DataSetName": ""
        },
        "JiraParameters": {
            "SiteBaseUrl": ""
        },
        "MariaDbParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "MySqlParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "OracleParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "PostgreSqlParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "PrestoParameters": {
            "Host": "",
            "Port": 0,
            "Catalog": ""
        },
        "RdsParameters": {
            "InstanceId": "",
            "Database": ""
        },
        "RedshiftParameters": {
            "Host": "",
            "Port": 0,
            "Database": "",
            "ClusterId": ""
        },
        "S3Parameters": {
            "ManifestFileLocation": {
                "Bucket": "",
                "Key": ""
            }
        },
        "ServiceNowParameters": {
            "SiteBaseUrl": ""
        },
        "SnowflakeParameters": {
            "Host": "",
            "Database": "",
            "Warehouse": ""
        },
        "SparkParameters": {
            "Host": "",
            "Port": 0
        },
        "SqlServerParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "TeradataParameters": {
            "Host": "",
            "Port": 0,
            "Database": ""
        },
        "TwitterParameters": {
            "Query": "",
            "MaxRows": 0
        },
        "AmazonOpenSearchParameters": {
            "Domain": ""
        },
        "ExasolParameters": {
            "Host": "",
            "Port": 0
        }
    },
    "Credentials": {
        "CredentialPair": {
            "Username": "",
            "Password": "",
            "AlternateDataSourceParameters": [
                {
                    "AmazonElasticsearchParameters": {
                        "Domain": ""
                    },
                    "AthenaParameters": {
                        "WorkGroup": ""
                    },
                    "AuroraParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "AuroraPostgreSqlParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "AwsIotAnalyticsParameters": {
                        "DataSetName": ""
                    },
                    "JiraParameters": {
                        "SiteBaseUrl": ""
                    },
                    "MariaDbParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "MySqlParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "OracleParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "PostgreSqlParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "PrestoParameters": {
                        "Host": "",
                        "Port": 0,
                        "Catalog": ""
                    },
                    "RdsParameters": {
                        "InstanceId": "",
                        "Database": ""
                    },
                    "RedshiftParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": "",
                        "ClusterId": ""
                    },
                    "S3Parameters": {
                        "ManifestFileLocation": {
                            "Bucket": "",
                            "Key": ""
                        }
                    },
                    "ServiceNowParameters": {
                        "SiteBaseUrl": ""
                    },
                    "SnowflakeParameters": {
                        "Host": "",
                        "Database": "",
                        "Warehouse": ""
                    },
                    "SparkParameters": {
                        "Host": "",
                        "Port": 0
                    },
                    "SqlServerParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "TeradataParameters": {
                        "Host": "",
                        "Port": 0,
                        "Database": ""
                    },
                    "TwitterParameters": {
                        "Query": "",
                        "MaxRows": 0
                    },
                    "AmazonOpenSearchParameters": {
                        "Domain": ""
                    },
                    "ExasolParameters": {
                        "Host": "",
                        "Port": 0
                    }
                }
            ]
        },
        "CopySourceArn": ""
    },
    "VpcConnectionProperties": {
        "VpcConnectionArn": ""
    },
    "SslProperties": {
        "DisableSsl": true
    }
}"

Step 3: Make the required modifications to the JSON file, then run the following command to update the connection:

aws quicksight update-data-source --cli-input-json file://edit-data-source.json --region <<Region>>

Example JSON to change data source connection to Aurora

{
    "AwsAccountId": "<<aws account id>>",
    "DataSourceId": "<<datasource id>>",
    "Name": "<<datasource name>>",
    "DataSourceParameters": {
        "MySqlParameters": {
            "Host": "<<Aurora hostname>>",
            "Port": <<port_number>>,
            "Database": "<<database_name>>"
        }
    },
    "Credentials": {
        "CredentialPair": {
            "Username": "<<user name>>",
            "Password": "<<password>>"
        },
        "CopySourceArn": ""
    },
    "VpcConnectionProperties": {
        "VpcConnectionArn": "<<VPC ARN>>"
    },
    "SslProperties": {
        "DisableSsl": false
    }
}
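After the update, you can confirm that the new connection details were applied (a quick check; the placeholders are the same as above):

aws quicksight describe-data-source --aws-account-id <<account id>> --data-source-id <<datasource id>> --region <<Region>>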

Monday, August 22, 2022

Migrating from AWS RDS MYSQL 5.7 to Aurora 3+ (8.0 compatible)

 

When we plan a major upgrade, it requires a lot of planning in terms of testing and fixing issues.

As Aurora MySQL is compatible with MySQL, migrating from MySQL to Aurora does not by itself create major issues for an application.

However, migrating from MySQL 5.7 to Aurora 3+, which is compatible with MySQL 8.0, is not that easy, because it also involves a major MySQL version upgrade.

Generally, major MySQL upgrades require a lot of planning and testing. Performing a major upgrade directly on the production instance is not recommended.

Once upgraded, we can’t revert to the previous version of the database engine. If we need to return to the previous version, we can restore from the DB snapshot taken before the migration. Until the upgrade is completed, the DB server will not be available.

There are two ways we can upgrade.

·         Minimal downtime, by upgrading the instance in place

·         No downtime using AWS DMS.

This document covers the minimal downtime option.

Upgrading MySQL 5.7 to Aurora 3+ requires the following steps.

1.       Upgrade RDS MySQL from 5.7 to 8.0.23

2.       Migrate RDS MySQL 8.0.23 to Aurora 3+ using snapshot migration

3.       Change the application connection string

4.       Change the Quicksight datasource

 

Migrate MySQL RDS from 5.7 to 8.0.23

As I said earlier, we shouldn’t attempt a major upgrade directly on the production servers; it should first be done in a testing environment, because a major upgrade requires downtime. When we do a major upgrade, sometimes the upgrade will fail, because it checks for compatibility issues. If there are any compatibility issues, the upgrade will not continue. It will report the issues in the UpgradeFailure.log file, which is available in the logs and events section.



The following are the general prerequisites listed on the AWS documentation site (a way to check most of them in advance is sketched after the list):

·         There must be no tables that use obsolete data types or functions.

·         There must be no orphan *.frm files.

·         Triggers must not have a missing or empty definer or an invalid creation context.

·         There must be no partitioned table that uses a storage engine that does not have native partitioning support.

·         There must be no keyword or reserved word violations. Some keywords might be reserved in MySQL 8.0 that were not reserved previously.

·         For more information, see Keywords and reserved words in the MySQL documentation.

·         There must be no tables in the MySQL 5.7 mysql system database that have the same name as a table used by the MySQL 8.0 data dictionary.

·         There must be no obsolete SQL modes defined in your sql_mode system variable setting.

·         There must be no tables or stored procedures with individual ENUM or SET column elements that exceed 255 characters or 1020 bytes in length.

·         Before upgrading to MySQL 8.0.13 or higher, there must be no table partitions that reside in shared InnoDB tablespaces.

·         There must be no queries and stored program definitions from MySQL 8.0.12 or lower that use ASC or DESC qualifiers for GROUP BY clauses.

·         Your MySQL 5.7 installation must not use features that are not supported in MySQL 8.0.

·         For more information, see Features removed in MySQL 8.0 in the MySQL documentation.

·         There must be no foreign key constraint names longer than 64 characters.

·         For improved Unicode support, consider converting objects that use the utf8mb3 charset to use the utf8mb4 charset. The utf8mb3 character set is deprecated. Also, consider using utf8mb4 for character set references instead of utf8, because currently utf8 is an alias for the utf8mb3 charset.
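Most of these prerequisites can be checked ahead of time with MySQL Shell's upgrade checker. A minimal sketch, assuming MySQL Shell (mysqlsh) is installed on a machine that can reach the database; the endpoint and user are placeholders, and the tool prompts for the password:

mysqlsh -- util checkForServerUpgrade admin@<<rds-endpoint>>:3306 --target-version=8.0.23

It prints each failed check along with the affected objects, so issues can be fixed before scheduling the actual upgrade.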

 

Steps to migrate

1.       Select the database and press the Modify button



2.       In the modification screen, change the DB engine version and click continue.

3.       In the scheduling of modifications option, select apply immediately to apply immediately and click Modify DB instance.

Once you click Modify DB instance, the upgrade will take a few minutes to complete.
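The same modification can also be made from the AWS CLI. A sketch with a placeholder instance identifier; --allow-major-version-upgrade is required because 5.7 to 8.0 is a major version jump:

aws rds modify-db-instance --db-instance-identifier <<instance-id>> --engine-version 8.0.23 --allow-major-version-upgrade --apply-immediately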

Migrate MySQL RDS 8.0.23 to Aurora 3+

 

1.       Select the database.

2.       From the Action drop-down button, select Migrate Snapshot.

3.       In the migrate database screen, give the proper database snapshot name, VPC, security group, etc., and click Migrate.

It will take a few minutes to complete.
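For reference, the snapshot migration can also be driven from the AWS CLI by restoring the snapshot into a new Aurora cluster and then adding an instance to it. A sketch with placeholder identifiers; check the exact Aurora MySQL 3 engine version string with aws rds describe-db-engine-versions:

aws rds restore-db-cluster-from-snapshot --db-cluster-identifier <<new-cluster-id>> --snapshot-identifier <<snapshot-id>> --engine aurora-mysql --engine-version <<aurora-mysql-3-version>>

aws rds create-db-instance --db-instance-identifier <<new-instance-id>> --db-cluster-identifier <<new-cluster-id>> --db-instance-class <<instance-class>> --engine aurora-mysql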

Tuesday, June 14, 2022

AWS Security options

Nowadays, cyber security is a very important concern around the world. On a daily basis, everybody faces, knowingly or unknowingly, a lot of cyber-attacks, and protecting data and environments has become a big challenge. AWS provides a lot of services to protect our cloud environment from cyber-attacks. AWS security is a shared responsibility between AWS and us: AWS takes on the burden of protecting the infrastructure and the services offered by them, and our responsibility is to use the proper tools and services to protect the services we are using.

The following diagram represents the shared responsibility model provided by AWS. For more details, please refer here.

 


The next challenge is protecting our application environment. To do so, it is better to follow a leading industry-standard cyber security framework, which provides a set of guidelines, standards, and best practices that we can implement.

In the cyber security industry, the National Institute of Standards and Technology (NIST) framework is widely used and popular. The NIST cyber security framework provides a set of guidelines for mitigating organizational cyber-attacks. The five core functions of the NIST framework are listed below.

In the context of NIST, AWS provides various security services that map to the NIST framework. As AWS has so many security features, we do not need to use them all; based on the application's needs, we should pick the proper security services to protect our application.



The following table contains details of the security services provided by AWS. These are the currently available services based on their site; the list may become outdated over time, as AWS keeps adding more security services. Click here to see the exact details.



Since AWS provides so many security services, we need to choose the proper tools to protect our environment. The following diagram represents a sample security design.

 


IAM: This service provides access control to AWS services. Using groups, roles, and policies, we can control access for a single person or a group of people.

CloudTrail: This service records all AWS account activity. It is helpful for monitoring and detecting any unauthorized access.

CloudWatch: This service is helpful for monitoring applications. We can configure it to collect all the application service calls; by default, it is integrated with 70+ AWS services.

VPC: Virtual Private Cloud (VPC) enables us to create a virtual network.

Secrets Manager: This service provides a feature to store and retrieve application secrets like passwords. It easily enables us to store database passwords, access keys, and other secrets, and to rotate them.
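For example, an application or deploy script can fetch a stored database password with the AWS CLI (a sketch; the secret name is a placeholder):

aws secretsmanager get-secret-value --secret-id <<my-db-secret>> --query SecretString --output text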

KMS: Key Management Service helps us create and manage encryption keys to encrypt data at rest. It is integrated with AWS services to simply encrypt and decrypt data.

Certificate Manager: This service is helpful for quickly creating certificates and deploying them on AWS resources such as Application Load Balancers, Amazon CloudFront distributions, and APIs on Amazon API Gateway. It also enables us to create private certificates for internal resources.

Cognito: This service is helpful for building authentication into web and mobile applications very quickly. Cognito provides user pools and identity pools.

Security Hub: Security Hub collects security data from across AWS accounts, services, and supported third-party partner products and validates it against industry security standards like CIS, PCI DSS, and AWS Foundational Security Best Practices.

AWS Shield: This service is helpful for protecting against DDoS (Distributed Denial of Service) attacks.

AWS WAF: This service protects web applications and APIs against common web exploits and bots that may affect availability or compromise application security.

Amazon Inspector: This service continuously scans EC2 instances and container images for software vulnerabilities.

Amazon Macie: Macie automates the discovery of sensitive data, such as personally identifiable information (PII) and financial data, to provide you with a better understanding of the data that your organization stores in Amazon Simple Storage Service (Amazon S3).

On top of these, we can use additional vendor services to further enhance security. AWS security competency partners are listed here.