Learn, Tinker, Hack, Repeat...

Docker is the next step in the evolution beyond virtualization. A Docker image contains everything needed to run an application in a self-contained environment, which makes it lightweight, self-sufficient, and portable to run either on-premise or in the cloud. The Docker API provides multiple ways to interface with images and containers, but managing many of them quickly becomes time-consuming without a good UI for Docker.

Portainer is a simple, lightweight Docker management toolset that provides an easier way to manage Docker environments (standalone hosts or swarm clusters).

Simplicity is Portainer's key strength. This guide covers the simplest way to get started: deploying the Portainer server on a standalone Linux or Windows host, or on a single-node swarm cluster.

1. Installing the Portainer server 

Setup on Linux and Windows* environments :

$ docker volume create portainer_data
$ docker run -d -p 9000:9000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

Note :
  1. The first command creates a named volume outside the scope of any container.
  2. The second command publishes the container on port 9000; the --restart always flag restarts the container automatically if it stops.
  3. These commands install the server only. Agents are not needed for standalone hosts.
* The same commands can be run on Windows 10 Build 1803 in PowerShell

You can manipulate, start, and stop Portainer using the name specified at launch, in this case portainer:

$ docker stop portainer
$ docker start portainer 

2. Dashboard Access

The command above starts the server on port 9000. You can access the admin page at http://localhost:9000 . You will be prompted to set up the admin account before signing into the portal.

Choose the appropriate environment you want Portainer to manage and click connect. 

Once connected, the Home page presents the details of the chosen environment.

You can explore the options under the Settings section to add users and groups for managing this server, change registries, use extensions, and so on.

Choosing the Endpoint will launch its full dashboard.

3. Downloading images and creating containers

To get started creating images and containers, navigate to Containers and choose an image of your choice to pull. If you have added your DockerHub account under Registries, it will be used to download the appropriate image.

Now you can opt to start the container either from the command line or from within Portainer itself.

3.1 From the command line

Once the container is ready, access the portal to view the status of the newly created container (in this example, myubuntu).

You can now explore the quick actions menu to view logs, inspect the container and access its console.

3.2 From the Portainer console

The Portainer console simplifies container creation. Navigate to Container > Add Container to pull the image from the registry and enable any advanced settings needed to create the container.


While there are more features than what meets the eye, this is all you need to get started with Portainer. If you want to explore more and learn about the deployment scenarios, read its detailed documentation.

This is a step-by-step guide to creating multiple user accounts on Amazon EC2 Linux instances, using individual self-generated key pairs. This helps small organizations give multiple users access to such instances without having to share keys or accounts.

The public / private key pair is generated on your local machine and the public key is uploaded to S3. When launching the EC2 instance via the wizard, you can then choose to Proceed without a key pair.

For Linux / Mac users :
  1. To create the public and private keys, use the following command (this creates a 4096-bit RSA key pair):
$ ssh-keygen -t rsa -b 4096
  2. Upload the public key to a folder in your S3 bucket. For example :
S3 > MyBucket > Keypair
  3. Save and secure your private key.
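Steps 1-3 can be sketched as a single shell session. The user name and bucket follow this guide's examples, and -N "" sets an empty passphrase for the demo only; use a real passphrase in practice.

```shell
# Generate a 4096-bit RSA key pair for user1 in a scratch directory
# (user1 and MyBucket are the example names from this guide).
dir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -f "$dir/user1-key" -N "" -q
chmod 600 "$dir/user1-key"            # step 3: secure the private key
# step 2: upload only the PUBLIC half to the bucket folder, e.g.
#   aws s3 cp "$dir/user1-key.pub" s3://MyBucket/Keypair/user1-pub.pub
ls "$dir"
```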

For Windows users :
  1. Use puttygen to generate the keys.
  2. Follow tutorials in DigitalOcean to create SSH keys.
  3. Upload the public key to S3 > MyBucket > Keypair
  4. Save and secure your private key.
The following steps are important when launching any Linux AMI.
  1. Ensure an IAM role exists with the AmazonS3FullAccess policy attached. The instance assumes this role to access S3, which is needed to read the public keys from the bucket and copy them to each user profile.

  2. Create the role under IAM > Roles, then attach it to the instance during Launch Instance > Configure Instance Details.

  3. Add the following script under the user-data section in Configure Instance Details > Advanced Details (as Text) :

######## AWS LINUX #########

useradd -m user1
usermod -aG wheel user1
mkdir /home/user1/.ssh/
aws s3 cp s3://MyBucket/Keypair/user1-pub.pub /home/user1/.ssh/authorized_keys
chown -R user1:user1 /home/user1/.ssh
chmod 700 /home/user1/.ssh && chmod 600 /home/user1/.ssh/authorized_keys

useradd -m user2
usermod -aG wheel user2
mkdir /home/user2/.ssh/
aws s3 cp s3://MyBucket/Keypair/user2-pub.pub /home/user2/.ssh/authorized_keys
chown -R user2:user2 /home/user2/.ssh
chmod 700 /home/user2/.ssh && chmod 600 /home/user2/.ssh/authorized_keys

# user-data already runs as root, so sudo is not needed here
echo "user1 ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
echo "user2 ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

yum update -y

######## UBUNTU #########

apt-get update -y
apt-get install -y awscli
useradd -m user1
usermod -aG sudo user1
mkdir /home/user1/.ssh/
aws s3 cp s3://MyBucket/Keypair/user1-pub.pub /home/user1/.ssh/authorized_keys
chown -R user1:user1 /home/user1/.ssh
chmod 700 /home/user1/.ssh && chmod 600 /home/user1/.ssh/authorized_keys

useradd -m user2
usermod -aG sudo user2
mkdir /home/user2/.ssh/
aws s3 cp s3://MyBucket/Keypair/user2-pub.pub /home/user2/.ssh/authorized_keys
chown -R user2:user2 /home/user2/.ssh
chmod 700 /home/user2/.ssh && chmod 600 /home/user2/.ssh/authorized_keys

# user-data already runs as root, so sudo is not needed here
echo "user1 ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
echo "user2 ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

This setup creates user1 and user2 and adds them to the sudo users. The aws s3 cp command copies each user's public key from the S3 folder to their .ssh/authorized_keys path. The last section lets them run commands as admin without needing passwords.

There are several security improvements that could be recommended here. While not explicitly shown in this example, limiting S3 access to a specific bucket and understanding the security implications of passwordless sudo are a few worth highlighting. Apply them wisely based on your particular needs.
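For instance, a bucket-scoped read-only policy could replace AmazonS3FullAccess. A sketch, using the bucket and folder names from the example above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MyBucket/Keypair/*"
    }
  ]
}
```

This grants the instance role only the ability to read objects under the Keypair folder, nothing else.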

Metasploitable3 has been around for quite a while and has been used by professionals, students, and researchers alike to improve their skill sets. It is a great improvement over the previous generation, Metasploitable2. With new exploits coming out every day, the community needed something more than a straightforward play environment for getting a high-privileged shell.

I tried the automated build several times as posted on GitHub, but it mostly failed when Vagrant used up the /tmp space (to which I had allocated ~1.5 GB).

If you face this issue, replace the instruction posted in the manual install section at step 2 with
$ TMPDIR=/var/tmp packer build windows_2008_r2.json
and continue with the rest of the steps. The build takes around 10-15 minutes to finish, and once ready you are greeted with the newly built VM.
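The TMPDIR=/var/tmp prefix works because a leading variable assignment applies only to the single command it precedes; the rest of the shell session is unaffected. A quick sketch:

```shell
# A leading NAME=value assignment applies only to the one command it prefixes,
# so packer sees TMPDIR=/var/tmp while the rest of the shell is unaffected.
unset TMPDIR                                    # clean slate for the demo
TMPDIR=/var/tmp sh -c 'echo "inside:  $TMPDIR"'
echo "outside: ${TMPDIR:-unset}"
```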

As this uses a trial license of Windows Server 2008 R2, you may want to read the wiki pages on the Windows product key and on tips and tricks for building a persistent VM.

I shall be writing a Metasploitable3 walkthrough soon. Your feedback is welcome. 
VMware Workstation 12.x does not compile its kernel modules correctly on Linux kernel >= 4.6. Read the update section below for the kernel 4.9 fix.

Once the installation completes and the services try to start, several errors are thrown, including missing kernel headers.

The workaround is to get the kernel header paths right by creating the correct symlinks in that location:

cd /lib/modules/$(uname -r)/build/include/linux
sudo ln -s ../generated/utsrelease.h
sudo ln -s ../generated/autoconf.h
sudo ln -s ../generated/uapi/linux/version.h
Once the symlinks are ready, the header path to supply to VMware is
/usr/src/linux-headers-$(uname -r)/include

If VMware still encounters issues with its services not starting, you will need to change the VMware modules' C code and recompile.

Locate the sources vmmon.tar and vmnet.tar, usually found under /usr/lib/vmware/modules/source.

Untar vmmon.tar and, in ./vmmon-only/linux/hostif.c, replace all occurrences of get_user_pages with get_user_pages_remote. Then re-tar and replace the original file.

Similarly, untar vmnet.tar and, in ./vmnet-only/userif.c, replace all occurrences of get_user_pages with get_user_pages_remote. Then re-tar and replace the original file. This has been successfully compiled and tested on Linux kernels 4.6 and 4.7.
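The replacement in both files can be scripted with sed instead of edited by hand. The snippet below demonstrates the substitution on a scratch file; run the same sed command against the untarred hostif.c and userif.c.

```shell
# Demonstrate the rename on a scratch file; the identical sed invocation works
# on vmmon-only/linux/hostif.c and vmnet-only/userif.c after untarring.
tmp=$(mktemp)
echo 'retval = get_user_pages(current, current->mm, addr, 1, 1, 0, &page, NULL);' > "$tmp"
sed -i 's/get_user_pages(/get_user_pages_remote(/g' "$tmp"
cat "$tmp"
rm -f "$tmp"
```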

If for some reason the module updater asks for GCC (even though an earlier version is installed), follow the steps here for setting up a newer compiler.

If you want to install VMware fresh, try uninstalling via the CLI using
sudo vmware-installer --uninstall-product vmware-workstation

Update for Kernel 4.9 :

If you are now on kernel 4.9, do the following:


STEP 1:

    cd /usr/lib/vmware/modules/source
    tar -xf vmnet.tar
    tar -xf vmmon.tar
    cd vmnet-only/
    gedit userif.c

Change at line 113 from (whichever variant is present)

    retval = get_user_pages(addr, 1, 1, 0, &page, NULL);
    retval = get_user_pages(current, current->mm, addr, 1, 1, 0, &page, NULL);

to

    retval = get_user_pages(addr, 1, 0, &page, NULL);

STEP 2:

    cd ..
    cd vmmon-only/linux/
    gedit hostif.c

Change at line 1165 from (whichever variant is present)

    retval = get_user_pages((unsigned long)uvAddr, numPages, 0, 0, ppages, NULL);
    retval = get_user_pages(current, current->mm, (unsigned long)uvAddr, numPages, 0, 0, ppages, NULL);

to

    retval = get_user_pages((unsigned long)uvAddr, numPages, 0, ppages, NULL);

    cd ..
    cd ..
    tar -cf vmnet.tar vmnet-only
    tar -cf vmmon.tar vmmon-only

Credits go to RGLinuxTech for this patch.

Metasploit often throws an error that its database cache is not yet built and that it will continue using slow search.

To fix this issue, ensure PostgreSQL is started and check its status:
$ sudo service postgresql start
$ sudo service postgresql status 
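For a script-friendly check before launching msfconsole, pg_isready (shipped with the PostgreSQL client tools) reports whether the server accepts connections:

```shell
# Sketch: check that PostgreSQL accepts connections before starting msfconsole;
# pg_isready exits 0 only when the server is reachable.
if pg_isready -q 2>/dev/null; then
  echo "postgres: up"
else
  echo "postgres: down"
fi
```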

Reinitialize the database and rebuild the cache from within msfconsole:
$ sudo msfdb init
msf > db_rebuild_cache
msf > db_status
The search should now be ready.

If this doesn't work, try the following command to re-establish the database connection and complete the trick:
msf > db_connect -y /usr/share/metasploit-framework/config/database.yml

Vijay Vikram Shreenivos