1. A review of Star Wars - The Force Awakens


    Last night my family (wife and 2 kiddos) saw the new Star Wars movie, and IMO JJ nailed it. I liked the opening scene - pan down from the stars to a planet, then a large ship. I also liked that JJ had his own rendition of the cantina - that was awesome.

    My favorite scene was when Rey(sp?) piloted the Falcon like a hotrod in a car chase - something that's been done a 1000 times before - but the way it was shot - awesome. And who would not want mad skills like that?

    I was truly sad about Han - it felt more like a real death than any other in my TV watching life (save for Rick's wife in Walking Dead). Heck - I was sad today about it - it's just a movie, but for a lot of us, Star Wars is a family we grew up with.

    Oh - and when I talk Star Wars, episodes 1-3 are not included - "These aren't the Star Wars you are looking for".

    I really liked this one - and thought one could watch episodes 4-7 and not "mind the gap" at all - because it was so continuous. So for that, JJ - I thank you.

    But I felt cheated on only one part - how it ended - once again - bad guys build monolithic weapon, good guys find weakness and blow it up. Three-fourths of the Star Wars movies end the same way. So I wonder what's next.

    Closing thought.. "hey Empire.. err.. First Order .. your monolithic systems keep getting blown up - might I suggest distributed systems?"

    • Gregg


  2. Shell - examples

    For years Perl was my go-to for any problem, but most of the time a decent shell script would have been just fine. And if one is running lots of commands and looking mostly at exit status, a shell script will do just fine.

    So there are the original single bracket [ ] tests, which were supplanted by the double bracket [[ ]] tests, and the math engine (( )).

    So the evolution was:

    [ $i -gt 12 ]   =>   [[ $i -gt 12 ]]   =>   (( i > 12 ))

    So for testing in shell, leave the 90's behind. ksh93/bash/zsh all have this.

    Test     Perl                       shell (ksh/bash/zsh)
    strcmp   ($var eq 'hello world')    [[ $var == 'hello world' ]]
    regex    ($var =~ /^\w+$/)          [[ $var =~ ^[a-zA-Z0-9_]+$ ]]
    numeric  ($var == 3.14159)          (( var == 3.14159 ))  # note the lack of $ sigil
    compound (12 < $var && $var < 16)   (( 12 < var && var < 16 ))
                                        [[ -f $file && $file =~ hosts ]]
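For a quick self-check, the table rows can be run as-is in bash/ksh/zsh; a small sketch (values made up):

```shell
#!/bin/bash
# the rows of the table above, runnable
var='hello world'
[ "$var" = 'hello world' ] && echo 'old [ ] string test'        # the 90s way
[[ $var == 'hello world' ]] && echo '[[ ]] string test'
[[ $var =~ ^[a-z]+ ]]       && echo '[[ ]] regex test'          # starts with letters
n=14
(( 12 < n && n < 16 ))      && echo '(( )) compound math test'  # no $ sigil inside
```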

    add the below to a file and test it out..

    #!/bin/ksh
    # (or /bin/zsh - ksh on roids)
    set -u  # do not allow empty/undeclared variables - helps with typos
    set -e  # bail on any failure. Can get rid of all those && between commands.
    set -x  # enable debug mode; turn off with 'set +x'.  Huh? I just accept it.
    me=$(basename $0)   # used by the usage message below
    function docleanup {
        echo 'this could be a cleanup tmp file function'
        echo "and we got one arg: $1"
    }
    true && echo 'true is always true'
    false || echo 'false always has a non-zero exit'
    if true; then
        echo 'true test is always true'
    fi
    echo 'but the && || are shorter for one liners..'
    # use { cmd; cmd; } to do multiple things on a test
    # simple for loop
    for host in localhost thirud.com.foo; do
        ping -c 1 -w 3 $host 1>/dev/null 2>&1 && echo $host up ||
            echo $host down
        ping -c 1 -w 3 $host 1>/dev/null 2>&1 ||
            { echo -n $host is down; echo ' .. its really down'; }
        #                           this semicolon is needed ---^
    done
    # traps - trap the exit code and do something.
    # Unix has ~32 signal slots for doing stuff before tearing the process down..
    # The zero signal had no meaning for a long time, and it was then used
    # for 'trap any exit'
    trap 'echo we are leaving the program' 0
    trap 'docleanup Bob' 0   # note: this replaces the previous 0 trap
    # Assignment cannot have spaces, unlike other languages.
    #value = 12                        # invalid - runs 'value' with args '=' '12'
    value=12                           # set value to 12
    var=$value          && echo $var   # simple way to set vars - no whitespace
    var="$value"        && echo $var   # quotes - interpolate variables
    var="${value}stuff" && echo $var   # delimit the variable name from other text..
    var='$value'        && echo $var   # string literal - no interpolation
    # the dollar in front of a variable dereferences it - thus
    # var=12 is accessed by $var
    echo "cows go 'moo'"
    var=   # sets to empty
    # getopt is the old external command, getopts is the newer builtin..
    # getopts reads in options and sets OPTARG, OPTIND
    while getopts abc:d opt; do
        case $opt in
        a) echo "I caught an 'a'" ;;
        b) echo "I caught a 'b'" ;;
        c) echo "c had a colon, so the next thing in line was set to \"$OPTARG\"" ;;
        d) echo 'just a d' ;;
        *) echo "Usage: $me [-a] [-b] [-c <string>] [-d]" ;;
        esac
    done
    # shift out the read-in options - since OPTIND is post-incremented, subtract 1
    # and then shift that many options out. leaves $1 equal to the first unprocessed
    # arg from the command line.
    shift $(( OPTIND - 1 ))
    # (( )) and [[ ]] are ugly because they were added to the shell later, so as
    # to not break older scripts that used [ (symlinked to /bin/test). Most
    # languages just say 'warning: your scripts will break', but the shell is so
    # widely used all over the OS that would be A Very Bad Idea. New features
    # were instead bolted on. Frankenstein had bolts.
    # number compare - the dollar sign is not needed inside
    (( var == 12 )) && echo "var = $var" || echo 'var != 12'
    # string compare
    s=unix
    [[ $s == unix ]] && echo $s
    # regex - do not quote-wrap the compare string
    [[ $s =~ nix ]] && echo "\"$s\" is some unix variant"
    ip=10.1.2.3
    [[ $ip =~ ^[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*$ ]] &&
        echo "we have an IP"
    # substitutions
    # rem - to extend the shell, it looks like they took the approach 'cannot
    # alter any existing syntax. current syntax is golden.' So we get ugly syntax:
    # substitution
    car=vw; car=${car/vw/Volkswagen}   # its a Volkswagen man
    song=lalalalala; song=${song/la}   # one too many la's
    # man ksh has a good explanation of the parameter expansions
    # set if unset
    : ${fuel:=diesel}                  # assign 'diesel' if fuel is unset/empty
    # man test    to see more..
    cat << HURL
    I rem this easily because it's like a cat barfing up a furball
    so use
       cat << HURL
       cat << HURL > file
         text to spit out, or to send to a file
    HURL
    cat <<- FURBALL
        Adding the hyphen to << will strip leading tabs.  Allows one to indent
        the heredoc and keep the code looking better
    FURBALL
    # process substitution <( ) will exec a command, sending output to a
    # temporary file descriptor - no more saving output to a file first
    # (diff exits non-zero on differences, so || true keeps set -e happy)
    diff -w <(date) <(sleep 2; date +%s) || true
    # Redirect output or create new output channels (put within a script)
    exec 2>/dev/null    # from here on, send STDERR to /dev/null
    # open other channels for more outputs
    exec 3>errors
    exec 4>output
    exec 5>log
    # use $( ) instead of `backticks` because nesting of commands works
    echo $( echo $( echo $( echo $( echo 3 ) 2 ) 1 ) ) top
    cat <<- 'YAK'
        % echo $( echo $( echo $( echo $( echo 3 ) 2 ) 1 ) ) top
        3 2 1 top
        % echo ` echo ` echo ` echo ` echo 3 ` 2 ` 1 ` ` top
        -bash: 2: command not found
        echo echo 3 1 top
    YAK
    # parsing - the shell parses in at least 3 passes:
    # original command:
    # SIZE=2000000; ls -l $(find . -size +${SIZE}c -print) 1>filelist.txt  2>&1
    # the semicolon separates the commands.
    # scan left to right for file redirections
    # pipes are executed right to left
    # the 'in between' of each pipe is what is processed left to right.
    # SIZE=2000000; ls -l $(find . -size +${SIZE}c -print) 1>filelist.txt  2>&1
    # the ampersand-1 (&1) means 'address of 1' - yup - that's from C.
    # this would redirect STDOUT to a file, then redirect 2 to whatever 1 is
    # pointing at.  Remember these are file handles. yes - if using > the files
    # will be truncated at this stage
    # the command has not been run yet. it is being set up..
    # which is why
    # cat file | sort > file
    # empties your file
    # set up the redirects - command not run yet
    # currently, 1 and 2 are directed to the terminal. this will 'redirect' them.
    # ls -l $(find . -size +${SIZE}c -print) 1>filelist.txt 2>&1
    # shell substitutions are made - command still not run yet
    # ls -l $(find . -size +2000000c -print) 1>filelist.txt 2>&1
    # then the embedded commands are run - find discovered 3 files
    # ls -l file1.mp3 file2.mp3 file3.mp3 1>filelist.txt 2>&1
    # finally the command is executed.
    # ls -l file1.mp3 file2.mp3 file3.mp3 1>filelist.txt 2>&1
    # Note
    # 1>file 2>&1    is often mixed up with  2>&1 1>file
    # but reading left to right, it's easier to see what is wrong:
    # rem the redirects: STDERR/STDOUT are pointing at the terminal when the
    # parsing begins.
    # STDOUT   terminal
    # STDERR   terminal
    # 2>&1     redirect 2 at whatever 1 is pointed at (STDOUT, the terminal)
    # 1>file   redirect 1 to 'file'
    # final result
    # STDOUT   file
    # STDERR   terminal
    # if this was a cron job, the errors are not going to the file. This
    # has given me a false sense that all was ok.
    # what I meant was:
    # 1>file 2>&1
    # parse that left to right, and now my file gets STDOUT+STDERR
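    The ordering rule is easy to verify for yourself; a small sketch using a temp file (the echoed strings are made up):

```shell
#!/bin/bash
# verify the ordering rule: 1>file 2>&1  vs  2>&1 1>file
tmp=$(mktemp)
# stdout redirected first, then stderr follows it into the file
{ echo to-stdout; echo to-stderr 1>&2; } 1>"$tmp" 2>&1
grep -c . "$tmp"     # 2 - both lines landed in the file
# stderr duplicated *before* stdout moves - it still points at the terminal
{ echo to-stdout; echo to-stderr 1>&2; } 2>&1 1>"$tmp"
grep -c . "$tmp"     # 1 - only stdout made it to the file
rm -f "$tmp"
```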


  3. Shell - lockfiles

    Lockfiles - I learned this from a colleague - use symlinks. Only one winner when creating a symlink.

    # test out lock files
    # should withstand: kill (all signals), and most races
    me=$(basename $0)
    lockfile=/tmp/$me.lock   # pick a lock path
    # primitive locking - use a symlink that points to the pid
    if [[ -L $lockfile ]]; then
        # still running, or remove the dead lock
        # could race here
        grep $me /proc/$(readlink $lockfile)/cmdline &&
            exit || rm $lockfile
    fi
    # create the lock - exit if we lose
    # if we get the link, we win.
    ln -s $$ $lockfile || exit
    # cleanup - except on signal 9 (the stale case is handled by the readlink check)
    trap "rm $lockfile" 0
    # do stuff
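    The "one winner" behavior of ln -s that the lock relies on can be demonstrated directly; a minimal sketch in a scratch directory (the pids are made up):

```shell
#!/bin/bash
# two would-be lock holders race for the same symlink; ln -s picks one winner
dir=$(mktemp -d)
lockfile="$dir/lock"
ln -s 1111 "$lockfile" 2>/dev/null && echo 'pid 1111 took the lock'
ln -s 2222 "$lockfile" 2>/dev/null || echo 'pid 2222 lost - lock exists'
readlink "$lockfile"                 # still 1111 - the winner's pid
rm -rf "$dir"
```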


  4. Git

    I started on Git a couple years ago and it is addicting. Thank you Linus for writing it and freeing it for others to benefit (just like the Kernel).

    I have a testing branch of code that I use in QA - it has all the changes in flight, which could be one to many things.. This makes piecemeal pull requests impossible, since Github pull requests are computed against the whole branch.

    What was shown to me was:

    • Master branch - keep as a pristine point to branch from.
    • For pull requests, cherry-pick just those changes into a clean branch and submit that..
    • This is one way to work. I reserve the right to change my mind later on and declare this post utter rubbish.

    Get the code

      git clone <ref>

    Update said code

    Get caught up before branching

     (master) % git remote add upstream <ref>
     (master) % git fetch upstream 
     (master) % git merge upstream/master

    Stay up late coding!

    Use testing branch for dev work

     (master) % git checkout -b testing


     (testing) % # write code and unit tests

    Branch for new commit

     (testing) % git checkout master
     (master) % git checkout -b ticket_18765342
     (ticket_18765342) % git cherry-pick <list of commit hashes for topic code>
     (ticket_18765342) % git push origin ticket_18765342
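    The branch-and-cherry-pick step can be rehearsed end to end in a throwaway repo first; a sketch (repo contents and commit messages are made up):

```shell
#!/bin/bash
# rehearse the cherry-pick flow in a scratch repository
set -e
G='git -c user.name=demo -c user.email=demo@example.com'
dir=$(mktemp -d); cd "$dir"
git init -q .
echo base > file; $G add file; $G commit -qm 'base commit'
git branch -M master                      # force the branch name used above
git checkout -qb testing
echo topic >> file; $G add file; $G commit -qm 'the one change to submit'
hash=$(git rev-parse HEAD)                # note the hash of just that commit
git checkout -q master
git checkout -qb ticket_18765342
$G cherry-pick "$hash"                    # only that commit lands on the clean branch
git rev-list --count HEAD                 # 2 commits: base + the cherry-pick
cd /; rm -rf "$dir"
```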

    Post code merge

    Now that the code is merged into master, fetch down all the changes, apply to the local master and your testing branch.

     (master) % git fetch upstream
     (master) % git merge upstream/master
     (master) % git checkout testing
     (testing) % git merge upstream/master

    Nuke that topic branch

     (testing) % git branch -D ticket_18765342

    Rinse and repeat

     (testing) % # write code and unit tests


  5. Internet Service Layers

    I have been in Operations now for a few years (full time) and in that time I have struggled to find a way to elucidate the different functions of a server lifecycle from "dock to dumpster", so to say.. I am not set on the names, but they get me closer to how internet services seem to work in general..


    Dock -> Racked -> cabled + switched -> firmware -> burn-in -> ready for an OS


    Everything from Boot to Fully armed and operational work station. A host at this stage is a clean-slate application server. Jumpstart/Kickstart/etc. OS image boot + network identity + Chef/Puppet.


    Automation around application deployment/control. This is a server that is ready to run applications.

    In closing

    These are different than the OSI layers in that if there is no application, forget the other layers. But they are also the same in that the layers build upon one another..


  6. AJAX and JQuery for server-side actions

    I wanted to do this: click on something and have that data saved to a file.

    Found this: save data through ajax jquery with form submit, used the click handler instead (from another page on the web), and BAM! This POC accomplishes that. Knowing this, it is easy to make a database call or something. The end goal is to add this to a home-coded gallery so we can wheat/chaff/tag 50k photos. Also it's an excuse to learn Javascript.

    Note: Being this is a POC, there is no input sanitizing, or protections of any kind. It's just a POC.


    There are two files for this POC: an html and a php file. Also, this is set in /etc/httpd/conf.d/php.conf:

    AddType application/x-httpd-php .html

    Set the paths as needed and test out.

    HTML File

    <!doctype html>
    <html lang="en">
    <head>
    <meta charset="utf-8">
    <title>click demo</title>
    <style>
    p {
      color: red;
      margin: 5px;
      cursor: pointer;
    }
    p:hover {
      background: yellow;
    }
    </style>
    <script src="//code.jquery.com/jquery-1.10.2.js"></script>
    </head>
    <body>
    <p>First Paragraph</p>
    <p>Second Paragraph</p>
    <p>Yet one more Paragraph</p>
    <script>
    /* works
    $( "p" ).click(function() {
        alert("Sending data: " + this.innerHTML + " " + this.offsetWidth);
        $( this ).slideUp();
    });
    */
    $( "p" ).click(function(e) {
        //alert("Sending data: " + this.innerHTML + " " + this.id);
        $.ajax({
            type: "POST",
            url: "script.php?file=data/thumbs-www/log/file.txt",
            data: { text : this.innerHTML, id : this.id },
            success: function(data) {
                // nothing to do for this POC
            }
        });
    });
    </script>
    </body>
    </html>

    PHP File

    <?php
    $ts = time();   // timestamp suffix
    $file = "/var/www/html/" . $_GET['file'] . $ts;
    // create a json string of the data passed
    $contents = '{ ts : ' . $ts . ', ';
    foreach ($_POST as $name => $value) {
        $contents .= "$name : \"$value\", ";
    }
    // cleanup - strip the trailing comma
    $var = '/,\s*$/';
    $contents = preg_replace($var, '', $contents);
    $contents .= ' }' . "\n";
    file_put_contents($file, $contents);
    ?>


  7. Shell - references and dynamic variables

    I learned this past week that shell programming, with all its pitfalls, can be quite nice once things are worked around. Now this is partially unfair - the shell has been told "run that old dusty code, and advance too".

    So there are the original [ ] tests, then came the [[ ]] tests, and then more neat functionality with the math engine (( )).

    Ruby, Perl - they get to rev versions. The shell is not allowed to, sort of. So it seems the functionality was added in via new test syntax.

    How else would you upgrade a language where one of the rules was "can't alter existing behavior"?

    If anything, this helped me not get all frustrated there are 3 types of bracket/brace tests...

    Being Perl is my most comfortable language to code in, I had to find the shell equivs..

    Regex checks

    # Perl
    if ( $var =~ /[0-9][a-z]/ ) {
        printf STDERR "message\n";
    }

    # ksh
    if [[ $var =~ [0-9][a-z] ]]; then
        print -u2 "message"
    fi

    There are plenty of guides showing the various tests. What I liked today was figuring this out:

    % cat dyn.ksh
    function makeVar {
        eval "$1=$2"
    }
    makeVar answer 42
    print -u2 "the var 'answer' was eval'd and \"answer\" has a value of \"$answer\""
    typeset -n c2=answer
    print -u2 "the variable c2 was  typeset -n c2=answer   (copy reference) and c2 has a value of \"$c2\""

    And running it yields that dynamic vars can be made.

    % ./dyn.ksh
    the var 'answer' was eval'd and "answer" has a value of "42"
    the variable c2 was  typeset -n c2=answer   (copy reference) and c2 has a value of "42"
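    Namerefs can also replace the eval trick entirely - a sketch of the same idea using declare -n (bash 4.3+; typeset -n in ksh93; setVar is a made-up name):

```shell
#!/bin/bash
# 'return by reference': the function writes through a caller-chosen variable name
function setVar {
    declare -n ref=$1   # ref is now an alias for the variable named by $1
    ref=$2
}
setVar answer 42
echo "answer = $answer"   # answer = 42
```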


  8. Install Ruby 1.9.3 on CentOS 6.3

    Categories: ruby, nesta

    Out of the box, Ruby 1.8.7 is installed on CentOS 6.3.

    This gent github.com/imeyer kindly posted the steps - they are repeated below. Thank God someone posted this - I would have spent a lot of time figuring it out..

    # versions inferred from the resulting rpm name below
    RUBY_VER=ruby-1.9.3
    RUBY_SUBVER=p484
    DISTRIB=el6
    yum install -y rpm-build rpmdevtools readline-devel ncurses-devel rpmdev-setuptree \
      gdbm-devel tcl-devel openssl-devel db4-devel byacc libyaml-devel libffi-devel make
    cd ~/rpmbuild/SOURCES
    wget http://ftp.ruby-lang.org/pub/ruby/1.9/${RUBY_VER}-${RUBY_SUBVER}.tar.gz
    cd ~/rpmbuild/SPECS
    wget https://raw.github.com/imeyer/${RUBY_VER}-rpm/master/ruby19.spec
    rpmbuild -bb ruby19.spec
    ARCH=`uname -m`
    KERNEL_REL=`uname -r`
    yum localinstall ~/rpmbuild/RPMS/${ARCH}/${RUBY_VER}${RUBY_SUBVER}-1.${DISTRIB}.${ARCH}.rpm

    Here is where my rpm had a different name, else all was good:

    find . -print | grep rpm
    yum localinstall ./RPMS/x86_64/ruby-1.9.3p484-1.el6.x86_64.rpm
    ruby --version

    Now I can finally resume setup of the code-highlighter

    sudo gem install rack-codehighlighter
