Monday, June 12, 2023

Why do DDoS attacks happen?

Like just about every company, Our Company has experienced attempted DDoS (distributed denial of service) attacks on our product surface area over the past couple of years.

In our case, the attack had no significant effect because of the strong security measures built into our system. As part of a standard application security examination, we detected the attack and “stopped” those requests at an early stage. I, as CTO, then informed the Exec Team of the incident, stating that we had received a high-level DDoS attack. It appeared that the perpetrator was from Russia and was attempting to attack our application via a German VPN, which we immediately blocked.

Too many companies try to hide or understate the severity of attempts to breach their cyber security defences, but I wanted to write this post because, by sharing information on real-life examples, we can help others ensure their cyber defences are as robust as possible.




Some of the questions for me were:

Why try to attack our application? What benefit is there to the attacker from attacking us?

Why did the hackers choose Our Company?

I believe these are questions that should be asked, not just for us, but for most companies.

Initially, I had no answers to those questions. I started investigating and believe that the following could be some of the causes:

1. Activism through hacking

2. Political motivation

3. Retaliation

4. Negative brand image

5. For pleasure or learning

In our situation, I believe it was purely for pleasure, and the popularity of our product may be one reason for the attention.

The following are the main types of DDoS attacks.

1. Application Layer Attack/Layer 7 Attack: The attacker uses several bots or services to send HTTP or HTTPS requests to the application repeatedly. The most common form is an HTTP “flood attack”, in which the attacker uses bots to send HTTP GET or POST requests to the server from many different IP addresses. This attack is tough to counter because the attacker keeps changing identity and IP address.

2. Protocol attack / Layer 3 or Layer 4 attack: Protocol-based attacks are primarily concerned with exploiting a flaw at OSI Layer 3 or Layer 4. A TCP SYN flood is the most common protocol-based DDoS attack, in which a stream of TCP SYN requests directed at a target can overwhelm it and render it unavailable.

3. Volumetric Attack: These attacks try to cause congestion by consuming all available bandwidth between the target and the wider Internet. Large amounts of data are sent to the target via amplification or other means of generating massive traffic, such as botnet requests.

In our scenario, we were targeted at the application layer.

To prevent these types of attacks, My Digital have developed numerous preventative approaches. The following are some of the ways to prevent DDoS attacks:

1. Create an effective monitoring system. Continuous monitoring is the practice of continuously monitoring applications, IT systems, and networks in order to detect security threats, performance problems, or non-compliance concerns in an automated manner. The goal is to detect potential problems and threats in real time and resolve them as quickly as possible.

2. Identify problems early in the development process. Follow the OWASP Top 10 best practices when writing code, and perform static and dynamic code analysis to detect vulnerabilities early. To secure applications, employ open-source code vulnerability tools to detect any open-source library vulnerabilities.

3. Create a strong internal and external security network. Avoid exposing unnecessary ports and IP addresses. To prevent malicious activities, use a good network firewall, intrusion detection tools, and endpoint security. Use a web application firewall at the application level (a minimal application-level rate-limiting sketch appears below).

4. Use your cloud provider's best practices. All cloud providers offer best practices and tools to safeguard the environment and applications. To avoid an attack, follow their best practices.

Build redundancy and practical backup procedures on top of all of this to eliminate single points of failure.
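
As an illustration of the application-level point above, Laravel's built-in throttle middleware can blunt simple HTTP flood attempts before they reach your application logic. This is only a minimal sketch with hypothetical route names and limits; a real deployment would still put a WAF or CDN in front of the application.

// routes/api.php — allow at most 60 requests per minute per client
use Illuminate\Support\Facades\Route;

Route::middleware('throttle:60,1')->group(function () {
    Route::get('/status', fn () => response()->json(['ok' => true]));
});

Requests over the limit receive an HTTP 429 response, which stops a naive flood from exhausting PHP workers, although a volumetric attack still has to be absorbed further upstream.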

Deploying Laravel Code into AWS Lambda

 

AWS Lambda does not directly support the PHP runtime. When creating an AWS Lambda function using PHP code, we must use containerization or the Bref package. In this article, I'll show you how I used the Bref package to deploy Laravel API services to an AWS Lambda function.

We must first install the Serverless Framework, which aids in the development and deployment of code to AWS Lambda. It is a free, open-source, Node.js-based command-line tool that deploys both your code and the cloud infrastructure required for a wide range of serverless application use cases, using simple and understandable YAML syntax.

Step 1: npm install -g serverless

If you have a Laravel project, please get into its root directory; otherwise, create a new project using the following command:

Step 2: composer create-project laravel/laravel example-app

In the root of the Laravel project, install the Bref packages using the following command:

Step 3

composer require bref/bref bref/laravel-bridge --with-all-dependencies

If you get an error message about dependencies, use the following command:

composer require bref/bref bref/laravel-bridge --update-with-dependencies

 

Now you have to create a serverless.yaml file. To generate a YAML file, use the following command:

Step 4

php artisan vendor:publish --tag=serverless-config

After running this command, you will see a serverless.yaml file in the project root, similar to the following:

service: <<Name of the service you want>>

provider:

    name: aws

    # The AWS region in which to deploy (us-east-1 is the default)

    region: eu-west-1

    # Environment variables

    environment:

        APP_ENV: local # Or use ${sls:stage} if you want the environment to match the stage

package:

    # Files and directories to exclude from deployment

    patterns:

        - '!node_modules/**'

        - '!public/storage'

        - '!resources/assets/**'

        - '!storage/**'

        - '!tests/**'

 

functions:

    # This function runs the Laravel website/API

    web:

        handler: public/index.php

        runtime: php-81-fpm

        timeout: 28 # in seconds (API Gateway has a timeout of 29 seconds)

        events:

            - httpApi: '*'

 

    # This function lets us run artisan commands in Lambda

    artisan:

        handler: artisan

        runtime: php-81-console

        timeout: 720 # in seconds

        # Uncomment to also run the scheduler every minute

        #events:

        #    - schedule:

        #          rate: rate(1 minute)

        #          input: '"schedule:run"'

 

plugins:

    # We need to include the Bref plugin

    - ./vendor/bref/bref

 

The final step is to deploy this to Lambda. To deploy Lambda code, you must first have an AWS access key and a secret key with appropriate permissions. To save that configuration on your local machine, use the following command:

Step 5

serverless config credentials --provider aws --key <key> --secret <secret>

Finally, run the following command to deploy:

Step 6

serverless deploy
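
When the deploy finishes, the Serverless Framework prints the HTTP API endpoint it created. A quick smoke test against that endpoint confirms Laravel is being served from Lambda (the URL below is only a placeholder; yours will differ):

curl https://abc123.execute-api.eu-west-1.amazonaws.com/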

Monday, February 13, 2023

Open source libraries or programming languages: Are they vulnerable?

 


As an architect, when I recommend open source, the team often raises concerns about open-source security. If the source code is open, does it become vulnerable? I say no. Even closed-source code is vulnerable and subject to attack, so both closed-source and open-source code are equally vulnerable if we do not pay attention to security measures.

Some people will argue that closed source is more secure because the source code is not publicly available and it is harder for attackers to crack the code. But source code is only needed to develop new features, and attackers are not trying to develop new features; the attacker's goal is to discover a flaw to exploit.

In general, attackers use a variety of tools to identify security flaws in a system, regardless of whether it is closed or open source. They do not require the source code in order to break in.

Generally, open source has a big advantage over closed source in terms of getting security fixes quickly. Open source is maintained by a community that is usually active in looking for security flaws and providing fixes as soon as possible, whereas with closed source we need to wait for the vendor to release a fix.

Some argue that because the source code is available, Trojan code could be introduced. That depends on which open-source software you are using: in most projects, the community reviews the code before it goes to release, so it is not easy to introduce a Trojan.

Before using any open-source technology, find out the answers to the following questions:

  • Is the open-source community active?
  • Is the software maintained by one person or by a group of people?
  • How frequently is the code updated, and how often are releases made available?

On top of this, use a proper static code analysis tool during development, and use open-source security vulnerability management tools to identify vulnerabilities in your open-source dependencies.
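
For PHP projects specifically, recent versions of Composer (2.4 and later) ship a built-in audit command that checks installed dependencies against known security advisories; it is a cheap first check alongside dedicated scanning tools:

composer audit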

My final piece of advice is not to be constrained when selecting any technology, tool, library, or framework; instead, choose what is best suited for your team and matches the business use case. Security should be part of your process and day-to-day activities, so don't limit it to code alone. Some people think that once they have selected a framework for development, it will automatically take care of security. This is not correct.

Tuesday, January 31, 2023

PHP Updates: Why We Need to Live With the Latest Version

 

This article was inspired by the shocking results of W3Techs' survey report. According to this report, 77.7% of websites use PHP, so the good news is that PHP is still a very popular language for website backend code.

When we look at the PHP versions used by websites, the statistics are shocking, with approximately 90% of sites running outdated PHP versions. No programming language is vulnerable by default as long as it receives frequent updates; it is running outdated versions that makes it vulnerable.

The PHP versions and their support cycle as of Jan. 31, 2023, as published on php.net, show that 8.0, 8.1, and 8.2 are the supported versions.

 


 

What will you miss if you do not update?

  1. Security updates: If you are not using a supported version, you will not receive security updates for known vulnerabilities; therefore, it is critical to keep your application up-to-date in order to secure it (a simple version check is sketched after this list). Known vulnerabilities are listed in the CVE details. The OWASP Top 10 entry “Vulnerable and Outdated Components” covers the importance of software updates.
  2. Performance: Look at the PHP version performance benchmarks to see what you are missing by not updating.
  3. New features: Look at the php.net site to see which new features you are missing by not updating.
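
As a small illustration, a deployment script can refuse to ship to an unsupported runtime. This is only a sketch; the file name is hypothetical and the minimum version should be adjusted as branches reach end of life.

<?php
// deploy-check.php — fail fast if the runtime is older than the oldest supported branch
if (version_compare(PHP_VERSION, '8.0.0', '<')) {
    fwrite(STDERR, 'Unsupported PHP version ' . PHP_VERSION . PHP_EOL);
    exit(1);
}
echo 'PHP ' . PHP_VERSION . ' is a supported branch.' . PHP_EOL;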

Wednesday, December 28, 2022

AWS Disaster Recovery

Disaster recovery planning and business continuity planning are very important for any organization to come out of a disaster quickly. Disaster recovery for on-premises environments requires a lot of effort and planning because it involves many third-party services (transportation, network connectivity, and so on) and various staff to set up networking, systems, etc. Disaster recovery planning for the cloud requires less effort, but we still need strong planning to build redundancy and recover quickly.

The resilience of the AWS cloud environment is a shared responsibility.  AWS infrastructure is available across different AWS regions. Each region is a fully isolated geographical area; within each region, multiple isolated availability zones are available to handle failure. All AWS regions and availability zones are interconnected with high bandwidth. When we use AWS as a cloud, we have various options to manage the high availability of the system.



Within region high availability

Each region represents a separate geographic area, and availability zones are highly available data centres within each AWS region. Each availability zone has isolated power, cooling, networking, etc. AWS provides a built-in option for dealing with an availability zone outage: we have to configure our environment with multi-AZ redundancy so that if an entire availability zone goes down, AWS is able to fail over workloads to another availability zone. A within-region high availability architecture also helps with compliance by keeping data in the permitted region while ensuring high availability.




Cross region high availability

A multi-region disaster recovery strategy helps address the rare scenario of an entire AWS region being down due to a natural disaster or technical issue. Highly sensitive applications need to plan for cross-region replication. When we plan this approach, we need to consider the availability commitments of each AWS service we use; most AWS services are committed to high availability. Cross-region high availability can be achieved in different ways based on our budget and compliance needs, so we need to choose the proper strategy:


  • Back up and restore
  • Pilot light
  • Warm Standby
  • Multi-site active/active

 

Backup and restore

This approach helps us address data loss, but it has a high RPO and a high RTO. The RPO is determined by how frequently we schedule data backups (for example, backups every four hours mean up to four hours of data can be lost). Because the standby environment is not yet ready, building an environment from backed-up data takes time, so our RTO is also very high.





Pilot light

This approach replicates the data to another region and also sets up the core infrastructure there. Servers are switched off and are only started when needed for testing or recovery. This reduces the RTO, with the RPO depending on the replication schedule. The approach is cost-effective in terms of recovery, but database corruption or a malware attack still requires a backup.





Warm Standby

This method is similar to pilot light, but a scaled-down version of the environment is kept running. Disaster recovery testing can be carried out at any time, so comparatively this improves confidence in a quick recovery. RTO improves slightly compared with pilot light, and RPO is based on the replication schedule.



Multi-site active/active

In this approach, both sites in different regions are active and running. Requests are distributed across regions by default, and if any one region goes down, another region automatically picks up the requests. This is the most costly approach. RPO and RTO are reduced to near zero, but backups are still required in case of data corruption or a malware attack.




These strategies increase the chances of staying highly available in a disaster scenario. Each strategy addresses a subset of disasters, not all of them, and the achievable RPO and RTO depend on the disaster.

 

Cross region and cross account high availability

For security or compliance reasons, many organizations require complete separation of environments and access between their primary and secondary regions. This helps mitigate malicious threats from people within the organization, as well as malware attacks on the primary account. Having our backups or primary database routinely copied to the secondary account helps to recover the primary account.

AWS Backup can be used to back up data across accounts. AWS Backup is a fully managed service for centrally and automatically managing backups. Using this service, we can configure backup policies and monitor the backup activity of our AWS resources in one place.

Tuesday, December 27, 2022

High Availability and Disaster Recovery

 


Often, people confuse high availability with disaster recovery because the two share some common best practices, like monitoring for issues, deploying to multiple locations, and automatic failover.

Availability differs from disaster recovery in objective and focus. Availability is concerned with keeping the system and application highly available over a given time frame. Application availability is calculated by dividing uptime by the total sum of uptime and downtime; for example, 43 minutes of downtime in a 30-day month works out to roughly 99.9% availability. Disaster recovery is concerned with the recovery of applications, environments, and people from large-scale disasters. To define the goal of disaster recovery, the two most important parameters are the Recovery Point Objective (RPO) and the Recovery Time Objective (RTO).

Availability vs. disaster recovery:

  • High availability is about eliminating single points of failure. Disaster recovery is a process containing a set of policies and procedures that are triggered when high availability is lost.

  • High availability helps us make sure our system stays operational in identified failure scenarios. Disaster recovery and business continuity planning address man-made disasters such as cyber attacks, terrorist attacks, and human error, as well as natural disasters such as floods, hurricanes, and earthquakes.

  • Application availability is calculated by dividing uptime by the total sum of uptime and downtime. For disaster recovery, depending on the disaster, two main objectives are defined: the Recovery Point Objective (RPO), the amount of data loss due to the disaster, and the Recovery Time Objective (RTO), the maximum amount of time required to recover the application.

  • A highly available system is built with proper redundancy; in the cloud, multi-AZ and multi-region hosting help to ensure high availability. A proper disaster recovery strategy helps to recover quickly from events such as a DDoS attack or a pandemic-like situation such as COVID.

Any product company needs to plan for both high availability and disaster recovery for business continuity. High availability planning protects us from high-probability events that occur on a regular basis: instance and data storage failures due to software or hardware issues, a web server not responding because of an unexpected issue, a load-induced outage, and so on are all common occurrences, and hosting an instance in more than one availability zone or region helps to address them.

The disaster recovery process helps us recover from major outages caused by human-made or natural disasters. Creating a backup of data storage in a different data centre and keeping a month's worth of data will aid recovery in the event of data loss or corruption, and also protects against data loss from a cyber attack. Replicating the data storage and environment in a different location helps to recover quickly from complete environmental failure, but it does not help with data loss or corruption issues.

Overall, high availability and disaster recovery are aimed at the same problem: keeping systems up and running in an operational state. The main difference is that HA is intended to handle problems while a system is running, whereas DR is intended to handle problems after a system fails.

Friday, November 4, 2022

Installing the New Relic PHP agent on the AWS EC2 ARM64 Graviton family

 

Installing the New Relic PHP agent is normally very simple, but once we upgraded our instance to the AWS EC2 arm64 family, we were unable to install the PHP agent using the usual steps. When we dug around, we found the following link, which solved our issue.

https://docs.newrelic.com/docs/apm/agents/php-agent/installation/php-agent-installation-arm64/

The following are the steps we followed to install it on an Amazon Linux 2 instance; I hope they will be helpful for others.

1.       SSH to an EC2 instance

2.       In the root folder, create a new folder called “newrelic”

mkdir newrelic

3.       Get into the newly created folder

cd newrelic

4.       As per the above-mentioned link, the New Relic PHP agent code is available in the following GitHub path:

 https://github.com/newrelic/newrelic-php-agent

Clone the New Relic PHP agent code so it can be compiled for arm64. Run the following command:

git clone https://github.com/newrelic/newrelic-php-agent.git

5.       Once the code is cloned, a new folder called “newrelic-php-agent” will be created in the current folder. Get into that folder to compile the source code:

cd newrelic-php-agent/

 

6.       Follow these steps to compile and install the New Relic PHP agent:

a.       Run sudo make all in the newrelic-php-agent folder to build; it will take a few minutes.

b.       Once it is compiled, run sudo make agent-install

c.       Run the following command to create log folder

sudo mkdir /var/log/newrelic

d.       Run the following command to give full permission to the log folder

sudo chmod 777 /var/log/newrelic

e.       Run the following command to copy the daemon file to the appropriate folder:

sudo cp bin/daemon /usr/bin/newrelic-daemon 

 

7.        Copy the newrelic.ini file to the PHP ini path. In Amazon Linux, additional PHP ini files are stored in /etc/php.d/.

a.       To find the ini file path locations, run the following commands:

To find additional ini file path:  php --ini | grep 'additional'

To find ini file path:  php --ini | grep ' Configuration File'

To copy the newrelic.ini file to the appropriate path, run the following command:

sudo cp agent/scripts/newrelic.ini.template /etc/php.d/30-newrelic.ini

8.       Once newrelic.ini is copied, edit the file and change the following parameters:

a.       To edit the file, run the following command:

vi /etc/php.d/30-newrelic.ini

b.       Update the following parameters and save

Add a valid license key to the line newrelic.license = "INSERT_YOUR_LICENSE_KEY".          

Change the application name that's shown in one.newrelic.com on the line newrelic.appname = "PHP Application"
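
For example, after editing, those two lines might look like the following (the key and the application name here are only placeholders):

newrelic.license = "your-license-key-here"
newrelic.appname = "My PHP Application"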

 

9.       Once everything is done, run the following commands to restart the PHP and httpd services and start the New Relic daemon:

sudo systemctl restart httpd.service

sudo systemctl restart php-fpm

/usr/bin/newrelic-daemon start
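
As a quick sanity check (not part of the official steps), you can confirm the agent extension is loaded by grepping the PHP configuration output:

php -i | grep newrelic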

 

The above steps should help you install the New Relic PHP agent. If you encounter any issues, the following link will be helpful for troubleshooting:

https://discuss.newrelic.com/t/php-troubleshooting-framework-install/108683