AWS Archives - Freelance Full Stack Developer in London, UK

Allowing access from one AWS service (EC2) to another service (S3), using an IAM Role

In this post I’m going to allow EC2 to use S3 to store files. I’m going to do this two ways:

  • First, simple access (less secure, not for production): the EC2 instance can do anything on S3
  • Second, I’ll limit what the EC2 instance can do (read/write only, on a specific bucket identified by its ARN)

Simple Access for EC2 to use S3

The Process

  1. Create Policy to allow access to service (S3 everything)
  2. Create Role (and attach policy from step 1)
  3. Give the EC2 instance the Role
  4. Verify it works using AWS CLI (from EC2 server), to upload a file

1 Create Policy to allow access to service (S3 everything)

  • Go to IAM Dashboard, then Create Policy from left menu.
  • Select the service to allow access to (S3 in this case)
  • Allow all Actions
  • Allow all Resources
  • Go to the next step (Review policy)
  • Enter name and Create Policy (I called mine: GreenBoxAllAccessToMyS3)
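
If you prefer the CLI, this step can be done in one command. A rough sketch (the policy name is the one I used above; s3:* on all resources matches the ‘allow everything’ console selections):

# Create a policy allowing all S3 actions on all resources (the simple, not-for-production version)
aws iam create-policy \
  --policy-name GreenBoxAllAccessToMyS3 \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      { "Effect": "Allow", "Action": "s3:*", "Resource": "*" }
    ]
  }'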

2 Create Role (and attach policy from step 1)

  • Back in the IAM Dashboard, select Create Role from the left side
  • Select the EC2 service (the service that will use this role), and move to the next step
  • Search for the policy you created, tick it, and move to the next step
  • Give the role a name and Save/Create (I called it GreenBoxRoleAllowAllAccessToS3FromEC2)
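
The CLI equivalent needs a trust policy saying EC2 may assume the role, then the policy attachment. A sketch (the account ID in the policy ARN is a placeholder; substitute your own):

# Create the role with a trust policy letting EC2 assume it
aws iam create-role \
  --role-name GreenBoxRoleAllowAllAccessToS3FromEC2 \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "Service": "ec2.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }
    ]
  }'

# Attach the policy from step 1 (123456789012 is a placeholder account ID)
aws iam attach-role-policy \
  --role-name GreenBoxRoleAllowAllAccessToS3FromEC2 \
  --policy-arn arn:aws:iam::123456789012:policy/GreenBoxAllAccessToMyS3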

3 Give the EC2 instance the Role

  • Go to the EC2 instance, and attach the Role just created
  • Tick the instance’s checkbox, then Actions -> Security -> Modify IAM role
  • Find the Role just created and Attach it
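
From the CLI, the role is attached via its instance profile (when you create the role in the console, an instance profile with the same name is created automatically; if you created the role with the CLI you would first need create-instance-profile and add-role-to-instance-profile). A sketch with a placeholder instance ID:

# Attach the role's instance profile to the instance (i-0123456789abcdef0 is a placeholder)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=GreenBoxRoleAllowAllAccessToS3FromEC2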

4 Verify it works using AWS CLI (from EC2 server), to upload a file

  • Finally, SSH onto the server (you can use the Connect option in the EC2 console)
  • Create a file and upload it to S3 using the command below:

aws s3 cp test.txt s3://for-testing-access/test2.txt

for-testing-access is my bucket.

Finally, check in the bucket to make sure the file is there.
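
You can also check from the instance itself, without opening the S3 console:

# List the bucket contents to confirm the upload arrived
aws s3 ls s3://for-testing-access/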

Version 2 – Access for EC2 to use S3 (limited permissions and specific bucket)

I’ll now detail how to tie this down a bit, so that the EC2 instance with the role can only do limited things (restricted permissions) on a specific bucket (based on the ARN of that bucket).

  • First copy the ARN of the S3 bucket to the clipboard (S3 dashboard)
  • Go back to our policy in the IAM Dashboard (I called mine: GreenBoxAllAccessToMyS3).
  • Search for it and then click it
  • Click ‘Edit Policy’
  • Under the Actions section, we’re going to edit it so it just allows Read/Write access to a specific bucket (it would be useful to allow List as well, but I won’t here, for brevity).

Check all the permissions (actions) that have ‘Object’ in their names under Read, and repeat for the Write section (so the role can read objects from and add objects to the bucket; you could limit it further, but for speed tick everything Object-related).

Remove any other actions/permissions, so that the only resource type left to specify is an object ARN (objects are linked to a bucket); the Resources section should then just ask for the object ARN, which we’ll add next.

We can then add the ARN of the specific bucket, with a star (*) for the object name (i.e. allow access to all objects in the bucket).

Click Add, then select Next/Review and go through the steps to save the policy.
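
For reference, in the policy editor’s JSON tab the finished document should look something like this (a sketch: I’ve shown s3:GetObject and s3:PutObject as the core read/write object actions; ticking every Object-related box in the console will add a few more):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::for-testing-access/*"
    }
  ]
}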

Finally, test as we did before: SSH in, add a file to the bucket, then check via the AWS S3 dashboard to see if it’s there.

References

https://aws.amazon.com/premiumsupport/knowledge-center/ec2-instance-access-s3-bucket/

https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html


Optimising Apache (for AWS EC2 micro or small instances)

In this post I’ll talk about the optimising/configuring I did on Apache so my site could live happily in its new home on an AWS EC2 small instance (running Amazon Linux). The guide below covers sites that get maybe a few hundred page views per day (if you have bigger traffic you will probably need a bigger server).

After I moved my site to AWS, Apache kept crashing. I would restart it and it would be fine for a while (maybe a few days or a week), then the same thing would happen again.

I looked into the memory situation with Apache and found it was launching too many processes for the amount of RAM on my EC2 instance. This is what I did to fix it. The command below is handy for a quick view of what’s currently being used, memory-wise, on your server:

free -m

Process

Check which MPM you’re running (for a small instance such as mine, e.g. 1GB RAM, you want to be running prefork):

httpd -V | grep MPM

If that doesn’t say prefork, you need to alter the Apache conf or one of its included conf files (I had to edit /etc/httpd/conf.modules.d/000-mpm.conf, but it can differ based on your distro; Amazon Linux seems to be quite similar to CentOS 7). It’s good practice to always make a backup of any config you change. grep in your Apache dir (/etc/httpd in my case) can help you locate the right file:

grep -r "prefork"

I had to comment out the other MPM (the event module) and uncomment prefork, so that this was loaded:

LoadModule mpm_prefork_module modules/mod_mpm_prefork.so

I then added this in the same file (grep to check if you have some of this already):

<IfModule mpm_prefork_module>
    # Keep idle server processes to a minimum (we are short on RAM)
    StartServers             1
    MinSpareServers          1
    MaxSpareServers          1
    # Max simultaneous connections, sized to fit available memory (see below)
    MaxRequestWorkers       15
    # 0 = never recycle child processes
    MaxConnectionsPerChild   0
</IfModule>
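
After changing the MPM and its settings, it’s worth checking the config parses cleanly and restarting Apache so the changes take effect:

# Sanity-check the config before restarting
sudo apachectl configtest

# Restart Apache (Amazon Linux / CentOS 7 style; on older init systems: sudo service httpd restart)
sudo systemctl restart httpd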

Because my memory was low, I needed to keep the number of started servers low, and MaxRequestWorkers too. This is how I worked out MaxRequestWorkers. First I checked how many processes Apache had running:

ps aux | grep apache

This showed me a bunch of php-fpm processes, so I ran this to see the memory they were taking up:

ps -ylC php-fpm --sort:rss | awk '{sum+=$8; ++n} END {print "Tot="sum"("n")"; print "Avg="sum"/"n"="sum/n/1024"MB"}'

This told me the Apache/php-fpm processes were taking up about 60MB each. My MaxRequestWorkers was set to 256, so 256 * 60MB = over 15GB of RAM required. Essentially, at busier times Apache was trying to accept more simultaneous connections than I had memory for on my EC2 instance (I only had 600MB-ish spare), so I needed to adjust it down (10 * 60MB = 600MB).
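
As a rough rule: MaxRequestWorkers ≈ memory you can spare for Apache divided by the average memory per process. A quick back-of-envelope check on the box itself (a sketch; it assumes a newer free where the 7th column of the Mem line is ‘available’, and the ~60MB average from above):

# available MB / ~60MB per Apache/php-fpm process = a safe MaxRequestWorkers
free -m | awk '/^Mem:/ {print int($7 / 60), "workers fit in", $7, "MB available"}'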

MaxRequestWorkers is the number of concurrent connections Apache will allow. I definitely need to do some optimising to get this figure up (I only get maybe 200 page views per day, so it’s OK for my site, and will be for many small sites/apps).

Apache queues requests once MaxRequestWorkers is reached, so new requests will wait to be serviced. This should be quick, but it depends; you may need a bigger EC2 instance if you have more traffic than you can handle.

Other optimisations I did:

The main optimisation that fixed my issues was the above; however, I also added this to my Apache conf file:

Timeout 60
KeepAlive On
MaxKeepAliveRequests 300
KeepAliveTimeout 2

I won’t cover these here, but the references below cover them in detail.

Modules I removed:

I tried removing some Apache modules to lower the MB used per Apache process (so I could have more MaxRequestWorkers).

I grepped to find where the modules were loaded (they’re strewn about the place, but mostly in /etc/httpd/conf.modules.d/00-base.conf in my case):

grep -r LoadModule

I removed a few, but it didn’t seem to make any difference, so I put them back in.
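
Incidentally, rather than grepping conf files, you can ask Apache itself which modules are actually loaded:

# Dump the list of loaded static and shared modules
sudo httpd -M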

TODO: add what I did to my php-fpm config (www.conf)

References:

This guide to tuning your Apache server is excellent:

https://www.linode.com/docs/guides/tuning-your-apache-server/
