Cultivating Infosec Knowledge

I often get asked through both work and social media channels how and where I obtain all of the Information Security knowledge that I routinely share. So I thought I would share my personal workflow for cultivating Infosec knowledge, and others can use what I describe in this blog post as a framework to build their own. I should point out that my workflow depends on using a Linux distro that supports specific packages such as Weechat. If you are primarily a Windows user, you may need to make some adjustments, such as starting to use Ubuntu.

Step 1: Twitter

By far the best source for cultivating knowledge is Twitter. First, there are tons of Information Security professionals from pretty much every domain of knowledge within Infosec. This of course involves obtaining an account (make sure you leverage 2FA) and following users who specialize in the areas you're interested in. Another great feature is 'Lists': curated groups of Twitter users for a specific area. This one is a good start: https://twitter.com/DanielMiessler/lists/infosec. So go get yourself an account if you don't have one and start searching hashtags such as #cybersecurity or #infosec.

Step 2: IRC Client That Logs Locally

You may be asking: what is IRC, and why do I need an IRC client to cultivate Infosec knowledge? This will become obvious as the post progresses, but in short, IRC is a real-time chat protocol defined in an Internet standards draft more than 20 years ago. The reason you want a modern IRC client that supports local logging is that there is an IRC gateway called Bitlbee that integrates Twitter and similar platforms into your IRC client, which lets you log all of that content for later reference and searching.

I personally use Weechat because of all the plugins available for it and because I can leave it running 24x7 in a Tmux session. Think of Tmux as a means of running persistent terminal sessions.
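To make that concrete, here is roughly how I keep a client alive across logins (a sketch; the session name `irc` is just an example):

```shell
# Start a detached tmux session named "irc" running weechat
# (the session name is arbitrary -- pick whatever you like):
tmux new-session -d -s irc weechat

# Later, from any login shell, reattach to the running session:
tmux attach -t irc

# Detach again with Ctrl-b d; weechat keeps running in the background.
```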

Step 3: Bitlbee

As mentioned earlier, Bitlbee is an IRC gateway that acts as a relay between your IRC client and the platforms it supports, such as Twitter and Facebook. For my purposes the Twitter integration is key, because it essentially turns your IRC client into a Twitter client, and most importantly, your Twitter timeline is logged locally for as long as you have it running. This is where Tmux comes in: even if you log out, your sessions keep running. This becomes advantageous when you want to pull out a bunch of links or content; all you have to do is grep through your Bitlbee Twitter logs.
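As an example of that last point, a one-liner like this pulls every unique link out of the logs (the log directory is an assumption; Weechat's default is `~/.weechat/logs`, but yours may differ):

```shell
# Hypothetical log location -- adjust to wherever your Weechat logs live:
LOGDIR="${LOGDIR:-$HOME/.weechat/logs}"

# Recursively extract every unique URL from the logged Twitter timeline:
[ -d "$LOGDIR" ] && grep -rhoE 'https?://[^[:space:]]+' "$LOGDIR" | sort -u
```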

Step 4: Slack & WeeSlack

Slack is a modern attempt to displace IRC using web-based APIs and polished features such as emojis, along with integrations with a large number of automation technologies such as Splunk and various devops apps. There is one Slack team with LOTS of Infosec peeps on it: it's called Brakesec and is run by Bryan Brake. Follow him and send him a tweet asking for access.

I use a very cool Weechat plugin called WeeSlack that integrates with Weechat and gives you the same great benefits that Bitlbee does with Twitter: Weechat is turned into a full-blown Slack client with logging.
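If you want to try it, the install is roughly this (a sketch; the plugin file name and directory layout are assumptions, so check the wee-slack project README for the current procedure):

```shell
# Weechat auto-loads Python plugins from this directory by default:
mkdir -p ~/.weechat/python/autoload

# Drop the plugin file in place (assumes you've already downloaded
# wee_slack.py from the wee-slack project into the current directory):
[ -f wee_slack.py ] && cp wee_slack.py ~/.weechat/python/autoload/
```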

Conclusion

With this setup I have a perpetual feedback loop that stores everything locally for reference whenever I need it, and with the content in plain text files I can query and extract it however I want.

Reformed.IO Project

I have recently purchased the Reformed.IO domain with the goal of providing a collaborative means for Reformed Christians to commune. There have been a number of recent incidents in which the major social media platforms (Facebook & Twitter) have censored content specific to holding a Christian worldview. I do not think it is too unrealistic to see this trend continuing, with the ultimate risk of the Christian witness being completely silenced.

I'm currently investigating a number of different technical frameworks to use, but hosting this and making it successful will require funding. This will mostly amount to monthly expenses for compute power. The more folks utilize the service, the more it will cost to keep it maintained and operating at acceptable levels.

I already have some high-level objectives defined, and here are the main ones:

  • Forum Boards – To discuss various topics such as Sola Scriptura, Confessions, Christology, and the like.
  • File Exchanges – Ability to exchange files of interest
  • Collaboration Teams – Create teams for specific discussions and collaboration.
  • Real-time Chat

I'm estimating that if there is a lot of activity on the service, it will take about $30 a month to keep things going. If you are interested in seeing this project take place, please consider becoming a monthly Patreon supporter by clicking the link below. If just 30 people commit to $1 a month, it can go live.

Patreon for Justin Andrusk

Python Script for Searching ExploitDB


So I was looking to clean up my Twitter favorites list, and starting with the oldest one, dated 2011, I found an article about using a Python script to search the local ExploitDB instance on Backtrack. Of course it piqued my interest, but clicking the source link directed me to a parked domain, a common problem with open source tools. After performing some Google-fu, I found a copy and downloaded it to my Kali instance, and of course it didn't work, as the ExploitDB path has changed.

So after a trivial change pointing it to the correct path, bingo, it works. I have created a 'Kali' repo on my Github if you want to grab it, and I'm probably going to be making some updates to it over time.
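If you just want the search without the script, you can get most of the way there by grepping ExploitDB's CSV index directly (the path below is an assumption based on the current Kali layout, and the keyword is only an example; older installs kept the index elsewhere):

```shell
# Assumed index location on current Kali; adjust if your install differs:
CSV="${CSV:-/usr/share/exploitdb/files_exploits.csv}"
KEYWORD="${KEYWORD:-apache}"

# Case-insensitive search over the exploit index:
[ -f "$CSV" ] && grep -i -- "$KEYWORD" "$CSV"
```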

Automating VirtualBox Snapshots

I depend a lot upon VirtualBox for my security-related research and testing. That being the case, I make a lot of changes to my VirtualBox VMs, and losing a given state without being able to roll back to the last known good state would be very bad. Yes, you can take snapshots manually via the GUI or even by means of the CLI, but when you have over 20 VMs to manage, this can be a pain in the butt.

This is where scripting comes in, so I built some simple Bash scripts to automate this process and have it run hourly via Cron.

The first script simply outputs to STDOUT a list of all the VirtualBox VM’s in the system:

[bash]
vboxmanage list vms
[/bash]

This will simply produce the name and UUID of each VM you have defined on the system.

Now to automate the snapshot process we simply craft something like:

[bash]
# NB: assumes your VM names contain no spaces
for i in $(vboxmanage list vms | awk '{print $1}' | sed 's/"//g')
do
  echo "Creating snapshot for $i"
  vboxmanage snapshot "$i" take "$i-$(date +%Y%m%d%H%M%S)"
done
[/bash]

This will create a snapshot of each VM, named after the VM followed by a date/time stamp. Put this script in your crontab and you're good to go.
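For reference, the crontab entry for an hourly run looks something like this (the script path and log location are just examples):

```
# Run the snapshot script at the top of every hour:
0 * * * * /home/user/bin/vbox-snapshots.sh >> /home/user/vbox-snapshots.log 2>&1
```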

How to Install Firewall Builder 5 In Ubuntu


Firewall Builder is a GUI application that allows you to create sophisticated firewall rules. Currently only version 4 is available in the Ubuntu repositories, so here is how to install version 5 in Ubuntu:

1. From a Terminal window type: wget -O - http://www.fwbuilder.org/PACKAGE-GPG-KEY-fwbuilder.asc | sudo apt-key add -

2. Add the line deb http://packages.fwbuilder.org/deb/stable/ VersionName contrib to /etc/apt/sources.list,
   where VersionName is the codename of your Ubuntu release, such as natty.

3. From a Terminal window type: sudo apt-get update

4. From a Terminal window type: sudo apt-get install fwbuilder
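Put together, the whole install looks like this (a sketch assembled from the steps above; `natty` stands in for your own release codename):

```shell
# 1. Import the Firewall Builder signing key (note: capital -O, not zero):
wget -O - http://www.fwbuilder.org/PACKAGE-GPG-KEY-fwbuilder.asc | sudo apt-key add -

# 2. Add the repository, substituting your Ubuntu codename for "natty":
echo 'deb http://packages.fwbuilder.org/deb/stable/ natty contrib' | sudo tee -a /etc/apt/sources.list

# 3. Refresh the package index and install version 5:
sudo apt-get update
sudo apt-get install fwbuilder
```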

Exporting Your Kindle Library


As much as I love the Kindle that I received this past Christmas, there are certain things that I have found wanting. Recently one of my Facebook friends asked me what books I had on my Kindle. I thought, "This should be a no-brainer; I'll just export the list from http://kindle.amazon.com as a CSV and send it off to her." That would be great, except that the current "Manage Your Kindle" portal does not have an export feature of any kind. Then I had an epiphany; I used to use a cool e-reader app for Linux… What was it called… Oh yes, Calibre! So, as any committed Ubuntu user would do, I fired up the Ubuntu Software Center, found it, and a couple of clicks later I had it installed on my desktop. Then I noticed there was not an export option in the GUI. Oh no! But wait, Google is my friend and I have a lot of confidence in my friend. I then ran across this handy little guide
that documents how you can access the Calibre database from the command line. Oh yes, geek heaven! Then all I had to do was run the list command, have it return title and author in a nice fixed-column-width format, pipe it to a text file, and send it to my Facebook friend.
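For anyone wanting to reproduce that, the command boils down to something like this (a sketch; `calibredb` ships with Calibre, and the output file name is just an example):

```shell
# Dump title and author for every book in the Calibre library to a file:
calibredb list --fields title,authors > kindle_books.txt
```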

So what is the moral of this story? You will inevitably run into obstacles when you are attempting to solve a problem. The point is to continue to divide and conquer and never give up until you achieve your objective.

P.S. I'm still curious as to why something as basic as an export function is lacking in the Amazon Kindle portal.

Death of Google Wave?

Looks like Google Wave has a life expectancy of about another year; it will be dead as a standalone product, according to Google, who say they have not seen the level of user adoption they would have liked. What does not make sense is how they describe the use and initial acceptance that Wave received. I'm beginning to sense a pattern with Google, as this is similar to what happened with the Nexus phone when they decided to stop manufacturing it less than a year after it was released to the public. What gives? Google opens it up to the masses, offers open-source APIs, and less than a year later they dump the project?

Personally, I'm going to take all of their additional products and services with a grain of salt, as the new stuff they offer seems to be about as stable as an egg in a blender.