AngularJS - Stick a Fork In It!

by Sean Murphy 16. February 2016 09:07

The Scenario

So I was working away on an EF6 / WebAPI / AngularJS project when I ran into an odd issue.  I had secured my WebAPI using OAuth and required that the user provide a bearer token on every request.  The strategy / workflow worked like this:

1. User browses to /login.

2. Angular's route provider recognizes this route and tees up a loginController and renders the view from the loginTemplate.html

3. User provides username/password and clicks the "Login" button.

4. The loginController component leverages an injected loginService to submit the credentials to the WebAPI on the middle-tier.

5. The WebAPI attempts to authenticate the user and if successful, provides a bearer token in the response payload.

6. The loginService stashes the bearer token in local storage.  A custom interceptor added to the $httpProvider then looks up the stored token and injects it into each subsequent WebAPI request (a rough sketch of such an interceptor follows this list).

7. Should this bearer token become invalid, the WebAPI would return a 401-Unauthorized for any attempt to call a method which required authorization, and the user would be shuttled back to the login page.
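
Roughly speaking, the interceptor from step 6 looks something like this (the module name, factory name, and storage key here are illustrative - they aren't lifted from the actual project):

angular.module("app")
    .factory("authInterceptor", ["$q", "$location", function ($q, $location) {
        return {
            // Attach the stored bearer token (if any) to every outgoing request.
            request: function (config) {
                var token = window.localStorage.getItem("bearerToken");
                if (token) {
                    config.headers = config.headers || {};
                    config.headers.Authorization = "Bearer " + token;
                }
                return config;
            },
            // On a 401, shuttle the user back to the login route.
            responseError: function (rejection) {
                if (rejection.status === 401) {
                    $location.path("/login");
                }
                return $q.reject(rejection);
            }
        };
    }])
    .config(["$httpProvider", function ($httpProvider) {
        $httpProvider.interceptors.push("authInterceptor");
    }]);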

All was fine and dandy in the world and this worked well.  However, after playing with it a few times, I thought it would be nice to do the user a courtesy and, once they successfully re-authenticated, take them back to the original page they requested - the one that resulted in the 401 in the first place.  This required jamming a path onto my login URL, like /login/<originally attempted URI>.

This was no problem, and leveraging the ability to use wildcard route group parameters in Angular made it easy - I simply set up my routeProvider's when condition to look for "/login/:destination*".  This way I could capture the whole original path - which very likely contained slashes.  A user could attempt to go look at /projects/4/sites/1, get a 401-Unauthorized back, be directed to /login/projects/4/sites/1, and be carried over to /projects/4/sites/1 easily after providing their login credentials.
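
For reference, the wildcard route definition itself is about this simple (a sketch - the template and controller names just mirror the ones mentioned above):

$routeProvider
    .when("/login/:destination*", {
        templateUrl: "loginTemplate.html",
        controller: "loginController"
    });

// After a successful login, the controller can bounce the user back to
// whatever path the wildcard parameter captured, e.g.:
//   $location.path("/" + $routeParams.destination);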

The Problem

The issue I very quickly encountered was that users who navigated straight to /login from the home page of the site were met with a sharp return to the / URL ... as though the request to /login didn't match any of the defined routes in the routeProvider.  However, a request to /login/ worked just fine.  It became apparent to me at that moment that even though I was using a wildcard parameter on the /login/:destination* route - the parameter was still a required parameter, which resulted in the route requiring a trailing slash.  Trailing slashes never look good, and trying to tell users that if they simply wanted to browse to the login page, they'd need to take care to include a trailing slash was a non-starter, and I really didn't want to include two separate, but very similar-looking, route definitions for "/login" and "/login/:destination*".  I knew that Angular also provided a convention for optional parameters by using the "?" character on the end of the parameter group name.  So "/login/:destination?" would serve requests for both "/login" and "/login/<some destination>" ... as long as that destination parameter group didn't contain slashes.  I tried using the syntax of "/login/:destination*?" to make the parameter group both wildcard AND optional - but to my dismay, Angular only recognized a single character on the end of the parameter group's name.  It could either be wildcard (*) or optional (?) but not both.  The giant door of disappointment slammed shut.  You can't get there from here.  Take a hike and get bent.

The Solution

After despondently staring into my coffee cup for a bit, I decided "to heck with it!  I'll just modify the Angular core .. which is ALWAYS a fantastic idea for portability and long-term maintainability! </sarcasm>"  I dug into the angular-route.js file and found the pathRegExp function which tore apart the route path, looking for the two options for wildcard or optional parameter groups, and beat up on the regular expression that made the match.  I refactored the code as such:

Original Code

      path = path
      .replace(/([().])/g, '\\$1')
      .replace(/(\/)?:(\w+)([\?\*])?/g, function(_, slash, key, option) {
        var optional = option === '?' ? option : null;
        var star = option === '*' ? option : null;
        keys.push({ name: key, optional: !!optional });
        slash = slash || '';
        return ''
          + (optional ? '' : slash)
          + '(?:'
          + (optional ? slash : '')
          + (star && '(.+?)' || '([^/]+)')
          + (optional || '')
          + ')'
          + (optional || '');
      })
      .replace(/([\/$\*])/g, '\\$1');


Refactored Code

      path = path
      .replace(/([().])/g, '\\$1')
      .replace(/(\/)?:(\w+)(\*\?|[\?\*])?/g, function(_, slash, key, option) {
        var optional = (option === '?' || option === '*?') ? '?' : null;
        var star = (option === '*' || option === '*?') ? '*' : null;
        keys.push({ name: key, optional: !!optional });
        slash = slash || '';
        return ''
          + (optional ? '' : slash)
          + '(?:'
          + (optional ? slash : '')
          + (star && '(.+?)' || '([^/]+)')
          + (optional || '')
          + ')'
          + (optional || '');
      })
      .replace(/([\/$\*])/g, '\\$1');

From careful examination of the modified expression in the refactored code above, you can see that it now also searches for "*?" on the end of a parameter group and sets the "optional" and "star" variables accordingly.  This allows the route processor to build a matching expression that handles the route with or without a trailing slash, and with a parameter group that might contain a ton of slashes, honoring the wildcard requirement.
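
With the patched angular-route in place, a single route definition covers both the bare "/login" and the "/login/<some destination>" cases - something along these lines:

$routeProvider
    .when("/login/:destination*?", {
        templateUrl: "loginTemplate.html",
        controller: "loginController"
    });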

Thinking that this was a pretty cool feature, and not wanting to be horribly selfish (or create a bunch of code that would always need to be maintained and updated with extreme care so as not to overwrite my carefully applied custom patches), I decided to contribute to the AngularJS project on GitHub.  With stars in my eyes and the hopes that the Angular gods would bless my modifications to their code, I began down the winding path of how to submit a pull request to the GitHub repository where AngularJS lives.

The Process (and the pain)

As with most things computer-programming related, the perception is that it's easy.  You just draw your boxes on the screen, and then it just works and you're done, right?  I thought - well, this is simple enough .. I'll just fork the AngularJS repository over to my GitHub account, branch, clone, make my various mods and changes, commit, push and then submit a pull request.  Easy as pie, right?

But then there are standards.  And there are spaces.  Sometimes too many of them.  Especially when you're switching between Windows and Unix environments, and tabs either become spaces or vice-versa.

I spent a little time looking through the project's contribution guidelines: I wanted to make sure that I honored all the odd little requirements, e.g. "Wrap all code at 100 characters."

Once I finished my changes, authored my tests and made sure all looked well from my side, I ran through the Jasmine and Karma tests and then made my pull request.  I signed the Contributor License Agreement and was delighted to watch TravisCI run through testing with green lights.  Everything worked without issue!  I was well on my way to becoming an AngularJS contributor and waited with bated breath for my changes to be accepted and merged.

90 minutes later, I noticed some activity on my pull request.  Could it be so soon?  I read through the comments and was aghast to find that a couple of odd grunt requirements had snuck into my package.json file, and that the indentation in my routeSpec.js was off.  With tears in my eyes, I went back to my project and corrected my spacing and grunt dependencies. I updated my pull request and began the process of waiting ...and waiting ...and waiting.  For 5 days.

On the 5th day, I meekly commented, stating that I believed I had satisfied all the various requirements and asking to please let me know if anything was outstanding.

The next day I received the verdict: my feature had been deemed important enough that it was reclassified as a fix and backported to earlier versions of the framework.  Victory!

Enjoy your new optional wildcard route parameters!


Angular Route Processing ...and processing ...and processing ...and...

by Sean Murphy 30. November 2015 08:27

Recently I was engaged in an AngularJS project that spent a little time switching between views.  Okay, well the term "a little" is pretty subjective.  As we all know, the more thumb-twiddling a user does while waiting for a response from a server, the less likely they are to use your app, or at the very least - use it with growing frustration levels.  Even if the server responds within a second or two, a locked up UI always leads to a confusing and frustrating user experience where the user generally becomes click-happy and forces the server to start processing more requests, which leads to longer wait times, which leads to higher wattage use at the server, which robs the user's microwave of the necessary power to efficiently cook the Hungry Hombré Heat-n-Eat Burrito for lunch, resulting in starving users, angry mobs, broken windows and unemployment ... and you wouldn't want that, would you?  WOULD YOU?

But I digress ...

I found myself in a somewhat familiar pattern in this project: upon a route matching one of my when conditions, Angular would tee up the view template and controller, and then we were off to the races:

  1. The controller would call a method on an injected service to getSomeData, using some route parameters obtained from the injected route provider.
  2. The service would place a call to a RESTful API.
  3. The API would be busy making coffee, shopping for groceries, or whatever APIs do when consuming applications need them most.
  4. The UI would sit there and look stupid while waiting for the API to get around to processing the request.
  5. Finally, the API would respond, the promise would be resolved, the $scope would get loaded with the appropriate data model and the UI would render.
  6. The user had starved to death by this point, so none of it really mattered (kidding .. no users were harmed in the making of this application).

To help keep the angry mobs at bay, I found myself splitting my views up into a content div and a loading-spinner div.  Each div used an ng-if and a scope-level variable, "isLoading", to determine its existence in the DOM.  When the controller first fired up, isLoading was immediately set to "true," showing the loading spinner.  As the promise from the service call was resolved successfully, the isLoading variable was set to "false," hiding the spinner and showing the view content.
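
Each of those controllers ended up looking roughly like this before the refactor (a sketch - the controller, service and property names are placeholders, not pulled from the actual project):

angular.module("app")
    .controller("siteDetailsController", ["$scope", "$routeParams", "siteService",
        function ($scope, $routeParams, siteService) {
            // Show the spinner div while the data loads.
            $scope.isLoading = true;

            siteService.getSite($
                .then(function (site) {
                    $ = site;
                })
                .finally(function () {
                    // Hide the spinner and reveal the content div.
                    $scope.isLoading = false;
                });
        }]);
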
After going through this drill in a couple of views, my OCD for code-reuse started hurting my brain.  I was also dissatisfied with the tight coupling of the controller to the service and route provider (or even an alternate I tried with injecting a model that made its own service calls).  I determined that I needed to change up my code a bit to honor the following goals:
  1. I should resolve the model for my controller as the route is processed, thereby injecting a fully resolved model into my controller and relieving it of the responsibility of loading data and managing the wait times.
  2. A loading spinner should be set up as a directive, and placed in my main layout template making it reusable and available to all views.
  3. The loading spinner directive should leverage $rootScope to intelligently know when a route is being processed (I know, I know ... just remember, the axiom of "Use $rootScope sparingly" != "Never use $rootScope or you WILL DIE")
The first order of business was to change up my routing so that when the user wanted to view a site's details, that particular route would set up the view, the controller, and resolve the model dependency as well.  The siteModel has a getSite method that makes a service call over $http to fetch some site details from the server.  By the time the controller loads in this instance, the "site" object should be fully resolved.

(function () {
    "use strict";

    // Note: the module name ("app") is assumed here; the original snippet didn't show it.
    angular.module("app")
        .config(["$routeProvider", "$locationProvider",
            function ($routeProvider, $locationProvider) {

                $routeProvider
                    .when("/sites/:id", {
                        templateUrl: "/app/components/site/siteDetailsTemplate.html",
                        controller: "siteDetailsController",
                        resolve: {
                            site: ["siteModel", "$route", function (siteModel, $route) {
                                return siteModel.getSite($;
                            }]
                        }
                    });
            }]);
}());

A quick look into our siteDetailsController shows that the controller is being injected with the resolved "site" and knows nothing about loading the site, or turning on/off a loading spinner, etc.


(function () {
    "use strict";

    // Note: the module name ("app") is assumed here; the original snippet didn't show it.
    angular.module("app")
        .controller("siteDetailsController", ["$scope", "$location", "site",
            function ($scope, $location, site) {
                $ = site;
            }]);
}());



Here is the directive for our loading spinner.  Note that it uses the $rootScope to listen for route change events.  When these events are fired, a variable ("isRouteLoading") is set to the appropriate boolean value.


(function () {
    "use strict";

    // Note: the module name ("app") is assumed here; the original snippet didn't show it.
    angular.module("app")
        .directive("myappWaitIndicator", ["$rootScope",
            function ($rootScope) {
                return {
                    restrict: "C",
                    templateUrl: "/app/directives/waitIndicator/waitIndicatorTemplate.html",
                    link: function (scope, element, attributes) {
                        scope.isRouteLoading = false;

                        $rootScope.$on('$routeChangeStart', function () {
                            scope.isRouteLoading = true;
                        });

                        $rootScope.$on('$routeChangeSuccess', function () {
                            scope.isRouteLoading = false;
                        });
                    }
                };
            }]);
}());


Our wait indicator template.  Note the use of "isRouteLoading" to toggle the presence of this markup in the DOM.


<div class="processing-overlay" ng-if="isRouteLoading">
    <div class="spinner"></div>
</div>



I've styled the various bits of the wait indicator to provide a fixed overlay and an animated GIF.


.processing-overlay {
    /* Full-viewport fixed overlay; the original property list was lost, so these are assumed */
    position: fixed; top: 0; right: 0; bottom: 0; left: 0;
}

.processing-overlay div.spinner {
    background: url(/Content/img/processing.gif) no-repeat 0 0;
}


Here I've landed the wait indicator in the main layout.  

_MainLayout.cshtml (snippet)

<section class="main-section">
    <div class="myapp-wait-indicator"></div>
    <div ng-view></div>
</section>

The end result is a nice wait message overlay whenever a route change is in progress!


AWS VPN - Lasso Your VPC w/ AWS VPN Connections (Part 1 of 2)

by Sean Murphy 5. May 2015 21:53

What IaaS Means To You

These days Infrastructure-as-a-Service has become almost as commonplace as the ubiquitous Software-as-a-Service, particularly in the last decade as the concept of Cloud Computing has evolved into a more mainstream, affordable and sensible option. Small organizations looking to leverage economies of scale are no longer faced with stacking costly servers into environmentally controlled back rooms, along with the payroll of the IT staff required to maintain said servers. For the price of the monthly power bill used to run on-premise equipment, organizations can deploy their infrastructure into the cloud and immediately gain all of the benefits of a large-scale data center, boasting features such as "geographic redundancy" and "high availability" - terms previously limited to an exclusive club of deep-pocketed organizations.

As organizations begin to migrate infrastructure to a permanent home in the clouds, certain questions often come into play. How much of an organization's infrastructure can be moved? What equipment should remain on-premise and why? As internet service providers continue to offer increasingly speedy connections (a business-class Comcast connection at 100/20mbps in my area comes in at about 25% the cost of a traditional T1), it becomes completely feasible to operate practically all of your back office infrastructure in the cloud.

Needs of the Local Area Network

The ability to shift back office infrastructure into a highly available cloud environment is great, but the aforementioned questions still remain. How much of that infrastructure makes sense to migrate? Despite the advancements in technology, organizations still have offices with employees that require computer workstations to perform their jobs. These systems require network connectivity not only to the internet, but to each other and to the back office infrastructure for simple things such as file sharing, centralized IT management, authentication, and last but not least - the mission-critical applications a business runs on such as CRM and ERP systems.

Picking up the entire back office and moving it to the cloud would wreak havoc on an office full of workstations pining for a server to talk to. Unless those workstations were unable to tell that the back office was anywhere other than the next room over.

Lassoing the Cloud with VPN

Fortunately for the sake of the office, VPN is here to save the day. The concept of always-on site-to-site VPNs is nothing new - they have been used to connect remote offices to each other and to corporate HQ since the advent of leased lines and frame relays. Thankfully most cloud service providers, such as Rackspace and AWS, provide VPN Gateway products to establish a cloud end-point to connect your office to the cloud in an always-on fashion. If your cloud service provider does not have this product available, it is possible to roll your own VPN Gateway using OpenVPN, which I will cover in part 2 of this article.

For this article, I'll be covering the process and procedure to connect an office LAN behind a ZyXEL USG 50 to an AWS VPC. When setting up a VPN with AWS it is often noted that the ready-made configuration files offered by AWS are suited for a class of device on par with a Cisco ASA device, and don't really cater to the smaller organizations who favor more affordable options such as SonicWalls and ZyXELs. We will be picking apart these configuration files and extracting the useful bits to enable our smaller-scale equipment to stand up a reliable connection.

Establishing the VPC

I will be configuring a VPC in AWS that consists of 2 subnets. One subnet will be public and serve as somewhat of a DMZ. The servers in this subnet will have their own elastic IP addresses by which they'll access the internet. The other subnet will be private and obscured behind an EC2 instance serving to provide NAT services. EC2 instances in both subnets will be protected by general AWS security policies, as well as their own firewalls. EC2 instances within the private subnet will have an additional layer of protection afforded by the instance providing NAT.

Topology and Network Considerations

Before you begin, it is absolutely critical to be thoughtful about your topology and network approach/strategy. Once the VPC is created and filled with EC2 instances, you cannot alter the network configuration without tearing the whole thing down. Again, I cannot stress this enough - give careful consideration to how you plan to lay out your network. My local office uses a network with a 24-bit subnet mask (, i.e. as the network ID. All hosts in the 10.0.0.x network are considered "local" and will send/receive traffic on the 10.0.0.x network segment without bouncing through a default gateway.

To stay within my class-A address space, but to differentiate enough to be easily recognizable (and allow for large growth on my local and remote office LAN side), I'm going to establish my VPC at AWS using its own CIDR block elsewhere in the 10.x.x.x space. Within that VPC, I will establish two subnets - one for public/DMZ servers and one for private/NAT'ed servers. Both subnets will be readily available to my office LAN, as I will be utilizing both tunnels - one routing traffic to the public subnet and the other to the private subnet - so fear not, faithful reader.  The reason I'm not simply routing to the VPC's entire CIDR block over a single tunnel is due to a limitation with the ZyXEL router, which we'll cover in a moment.

Using the AWS VPC Wizard makes it quick and easy to spin up this VPC configuration. Just look for the option to create a VPC with a public and private subnet, with the private subnet accessing the internet by way of a NAT instance.

Once you have the configuration set the way you'd like, click the "Create VPC" button and AWS will tee up your public and private subnets along with an EC2 instance in the public subnet complete with a public Elastic IP that will act as a NAT instance for your private subnet.

Setting Up VPN Endpoints

Configuring the Far End

Now that our VPC is up and running with public and private subnets, we'll begin setting up the various bits that will make connectivity to our local office via IPSec tunnel possible.

  1. The first step is to create a customer gateway in AWS. The customer gateway represents the WAN side of our local office.

    • From the VPC Dashboard, click the "Customer Gateways" option, and then click the button to "Create Customer Gateway."

    • Enter the details about our local office. I've entered the WAN IP address of the local office router along with a name identifying the local office. I will also be specifying static routing as my office's router does not support Border Gateway Protocol (BGP).

    • Click the "Yes, Create" button and after a moment your Customer Gateway appears in the list as "available."
  2. Next we'll create a Virtual Private Gateway. This represents the router on the AWS side of the equation.

    • From the VPC Dashboard, click on "Virtual Private Gateways" and then "Create Virtual Private Gateway."

    • Give it a name and click the "Yes, Create" button. After a moment, the new gateway appears in the list with a status of "detached."

    • Select the gateway and click the "Attach to VPC" button. Choose the VPC from the list and click "Yes, Attach."

  3. Now we'll create our VPN connection tying our customer gateway we created in step 1 to the virtual private gateway we created in step 2. With each office that we connect to the VPC, we'll need to establish an additional customer gateway and an additional VPN connection. Unless you're wanting to employ some specific access rules or routing strategies, you can stick with a single virtual private gateway for now.

    • From the VPC Dashboard, click on "VPN Connections" and then the "Create VPN Connection" button.

    • Name your new VPN connection in a manner that describes what it is connecting (I called mine Beaverton_AWS_Tunnel).

    • Select the virtual private gateway created in step 2 and the customer gateway created in step 1. Again, since my router doesn't support BGP, I'm opting for static routing.

    • Since I've chosen to employ static routes, I'll need to specify the network ID, in CIDR notation, of the local office that should be advertised to the VPC. In my case, I've entered

    • Click the "Yes, create" button, and after a few moments your VPN connection should appear in the list as available.
    I have occasionally run into issues here where AWS indicates an error citing that the VPN connection was created, but it was unable to set up the static routes. If the VPN connection fails to appear after a few moments, you may have to try creating it again. Otherwise, if it appears as expected, select it in the list and then click on the "Static Routes" tab below to verify that the static route you specified is indeed listed. If it is not, you may enter it manually here.

  4. The final step in configuring the VPN is to have a look at our routing tables and security policies. After we created the VPC, an EC2 instance was created for us in the public subnet to serve as a gateway for any instances we spin up in the private subnet. In setting up this NAT instance, a couple of routing tables and a security group were created.

    The security group was flagged as the default security group for the entire VPC, allowing any EC2 instances that are members of this group to have all-traffic access to each other across both private and public subnets within the VPC. In order to allow access to the machines in this security group from machines on the other side of our VPN connection, we'll need to add an additional rule that opens the door for our local office network.  

    Additionally, some route tables have been set up.  There are two - one is used to route traffic out through the internet gateway for the VPC.  This route table is used by machines in the public subnet to bounce traffic heading for non-local networks to the standard gateway.  The other route table is used by the private subnet, and routes traffic to the network interface on the NAT instance.

    • From the VPC Dashboard, click on "Security Groups" and locate the default security group for your VPC.

    • Select the security group and then click on the "Inbound Rules" tab below.

    • Click the "Edit" button and then the "Add Another Rule" button.

    • Set this new rule up to allow "All Traffic" and "ALL" protocols from a source that is the CIDR of the local office - in my case,

    • Next, click on the "Route Tables" section and examine your route tables - there should be two.  Ensure that your subnets are explicitly associated with the appropriate table.  Oftentimes, the private subnet does not have an explicit association and defaults to the "main" route table - which forwards traffic bound for (the default route) to the network interface attached to the NAT instance.  Define this association explicitly.

    • Select each subnet and examine the Route Propagation tab.  This is used to advertise the Virtual Private Gateway created in step 2 within this route table.  Determine if you'd like visibility to the VPN gateway and if so, edit and click the checkbox to propagate the Virtual Private Gateway within the appropriate route tables. 
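
If you'd rather script this far-end setup than click through the console, the steps above map fairly directly onto the EC2 API.  Here's a rough sketch using the Node.js aws-sdk - the region, WAN IP, VPC ID and office CIDR are placeholders, and error handling is omitted for brevity:

var AWS = require("aws-sdk");
var ec2 = new AWS.EC2({ region: "us-west-2" });

// Step 1: the customer gateway represents the office router's WAN IP (static routing, so the ASN is just a default).
ec2.createCustomerGateway({ Type: "ipsec.1", PublicIp: "", BgpAsn: 65000 }, function (err, cgw) {
    // Step 2: the virtual private gateway is the router on the AWS side; attach it to the VPC.
    ec2.createVpnGateway({ Type: "ipsec.1" }, function (err, vgw) {
        ec2.attachVpnGateway({ VpcId: "vpc-xxxxxxxx", VpnGatewayId: vgw.VpnGateway.VpnGatewayId }, function () {
            // Step 3: tie the customer gateway to the virtual private gateway with a static-routing VPN connection.
            ec2.createVpnConnection({
                Type: "ipsec.1",
                CustomerGatewayId: cgw.CustomerGateway.CustomerGatewayId,
                VpnGatewayId: vgw.VpnGateway.VpnGatewayId,
                Options: { StaticRoutesOnly: true }
            }, function (err, vpn) {
                // Advertise the office LAN to the VPC as a static route.
                ec2.createVpnConnectionRoute({
                    VpnConnectionId: vpn.VpnConnection.VpnConnectionId,
                    DestinationCidrBlock: ""
                }, function () { });
            });
        });
    });
});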

Now that we've configured all the pieces on the AWS side, the next step is to configure our local office to connect. We'll need some configuration settings to do this, and thankfully those settings are pretty easy to find.

From the VPC Dashboard, click on VPN Connections and then select the VPN connection you created in step 3. Note the "Download Configuration" button above the list. Click on this button and choose a configuration format that matches your hardware at your local office. If you're like me and using small business class equipment that isn't listed, choose the generic option and download the file.

Configuring the Local Office

At the local branch office, we're using a ZyXEL USG 50. This router is not listed in the available configuration files to download from AWS, so we'll be using the generic/platform-agnostic version of the configuration file. The generic config isn't really a configuration file per se, it's just a simple document that provides the settings necessary for setting up IPSec tunnels on any device capable of supporting an IPSec VPN.  

Amazon's VPN sets up two IPSec tunnels to provide failover.  Unfortunately the ZyXEL USG 50 is not capable of keeping both tunnels open at the same time when they route to the same subnet.  In order to make use of both tunnels, we'll use one tunnel to route traffic to our public subnet, and the other to route traffic to our private subnet.  The ZyXEL doesn't seem to mind if a packet sent down one tunnel is responded to over the other, and there's the added bonus of both tunnels being online at the same time - which removes the warning message on the AWS VPC dashboard that one tunnel is down, and also allows us to quickly establish a manual route if one of the tunnels should fail.


  1. First we'll need to configure a couple of address objects to represent our different VPC subnets.  In the ZyXEL configuration page, navigate to Object -> Address and click "Add."
  2. Define an address object of type subnet, one for each VPC subnet.  I created "AWS_Public" and "AWS_Private" subnet address objects matching the CIDRs of the public and private VPC subnets, respectively.
  3. Next we'll need to create the gateways.  In the ZyXEL configuration page, navigate to VPN -> IPSec VPN and click the "VPN Gateway" tab. Add a new VPN gateway.
  4. We'll set up a VPN gateway for each one of the AWS tunnels.  Click the link to "Show Advanced Settings," and then using the generic configuration document we downloaded at the end of the previous section, enter the following elements:
    1. Peer Gateway Static Address (this is the outside IP address of the AWS Virtual Private Gateway for tunnel #1)
    2. Pre-Shared Key
    3. SA Lifetime (should be 28800)
    4. Proposal (should switch to AES128 / SHA1)
    5. Key Group (should be DH2)
  5. Repeat this same sequence of steps for Tunnel #2.  In the end you should have two VPN Gateways - one for each tunnel.

  6. Now it's time to set up the actual VPN Connections.  On the ZyXEL Configuration page, click VPN -> IPSec VPN. Click "Add" to create a new VPN Connection.
  7. We'll create a new VPN connection for each of the VPN Gateways we created in step 4 and 5.  The difference is that the remote policy for each will be slightly different - one VPN connection will route traffic to our public VPC subnet, and the other will route traffic to our private VPC subnet. Click the "Show Advanced Settings" and then enter the following elements:
    1. MSS Adjustment - Custom Size (as specified by the configuration doc from AWS - should be 1387).
    2. VPN Gateway (choose one of the tunnels created previously).  I've chosen to route traffic on my Tunnel 1 to the public subnet.

    3. Set the Local Policy to your LAN subnet.

    4. Set the Remote Policy to the corresponding AWS subnet (e.g. AWS_Public) you wish to route traffic to over this tunnel.

    5. SA Lifetime (should be 3600)

    6. Proposal (should switch to AES128 / SHA1)

    7. Perfect Forward Secrecy (should be DH2)

    8. Ensure the zone is set to the IPSec_VPN zone.

    9. Additionally - we'll need to enable a connectivity check to prevent AWS from tearing down the VPN tunnel after a period of inactivity (nothing like users coming into the office in the morning and not being able to connect to servers in the VPC).  This is done easily - just establish a periodic ICMP packet sent to a host within the subnet you're routing traffic to.  This ping will keep the tunnel open.  I've set mine to ping the NAT instance every 30 seconds.

  8. Repeat these same steps for the second tunnel, only be sure to route your traffic to the second AWS tunnel and subnet.  If you try to route traffic to the same subnet as the previously created connection, the ZyXEL will refuse to keep both connections up and running at the same time.

  9. The final step is to allow traffic back to our LAN from the VPC.  In the ZyXEL configuration page, click the Firewall link and "Add" a new rule to allow traffic from the IPSec_VPN zone to the LAN zone (LAN1 in my case).


Validating the Connection

At this point, you should have a fully functional site-to-site VPN connection to your AWS VPC subnets from your local office ZyXEL router.  Most routers that are capable of constructing IPSec VPN tunnels should be able to perform this feat, but depending on the router's feature-set, your mileage may vary.

Note the Blue/Purple "Connected" status icons on each VPN connection in my ZyXEL.




AWS reports that both tunnels are online.


Pings being sent / received from my LAN to machines in both private and public subnets.


Pings being sent / received from machines in both private and public subnets on the VPC to a machine on my LAN.

In the next article, we'll discuss setting up an OpenVPN server to handle VPN connections from dynamic clients.  This will help to serve the members of your office who may be working from home or working in the field.  Until then, happy VPNing!


PHP, Eclipse and the Zend Framework

by Sean Murphy 22. October 2012 13:59

So I jumped at the idea of working on a PHP/Zend project.  I mean, who doesn’t relish the thought of developing in a weakly-typed language that seems more like a virtual explosion of the key-word and function factory than an actual language alongside a bloated, overly complex MVC framework?  Throwing caution to the wind, I dove in head-first without so much as checking the temperature of the water … or if there even was water to begin with.

First things first, I would need an IDE.  The previous developer had established the project using Komodo 7, an IDE that’s dedicated to PHP development and works well with the Zend Framework, which was the MVC library this existing project was built upon.  Installing and configuring Komodo was relatively painless; the trial version provided a full-featured IDE that wasn’t hobbled in any way.  Wait a minute - I said “trial” didn’t I?  With a trial period of only 21 days, if you plan to be developing for more than 3 weeks, you’ll soon reach the end of that road where the barriers are adorned with $300 price tags.  I’m sure most would agree that this is always a nice thing to be presented with in the open-source world…or not.

With my trial period expired and tears in my eyes, I set my sights on something more traditionally open-source:  The 900lb gorilla of the Java world - Eclipse.  Some research yielded the presence of a plugin known as the “PDT” – “PHP Development Tools.”  Using Eclipse’s “Install New Software” feature, it was easy to pull down the PDT and get it set up and working.  With the PDT installed, I set up my project workspace and checked my project out from the svn repository.

The PDT added a new “PHP” perspective to my Eclipse IDE, which worked quite well for editing my PHP source.  Now, to be fully functional, I only had a few tasks left to tackle:

  • Add a reference to the Zend Framework to the project’s Include Path
  • Install some DBGp-compliant debugger

Adding the Zend Framework to the include path was a two-step process.  For the first step, I had to add the Zend Framework library as a general “PHP Library.”  To accomplish this in the IDE, I navigated to Window -> Preferences, expanded the “PHP” section and selected “PHP Libraries.”  Clicking the “New” button allowed me to create a friendly name and placeholder for the library.  I named my library “ZendFramework v1.11” and clicked “Ok.”  Next, I clicked on my newly-created library item, and clicked the “Add External Folder” button.  This allowed me to browse to the folder where I had the Zend Framework installed (in my case, /usr/lib/ZendFramework/library).  Now that I had included the Zend Framework into the IDE as an available PHP library, I just needed to add a reference to it in my project’s include path.  In the IDE menu, choose Project -> Properties.  In the Properties dialog for the project, click on the “PHP Include Path” option on the left, and then select the “Libraries” tab on the right.  Click “Add Library” followed by the next button on the resulting dialog box.  Check the Zend Framework library, and click Ok and Ok.  Voila – your project now includes the Zend Framework.

The last step was to set up a debugger.  XDebug was already on the system and seemed to be the logical answer.  Opening the php.ini file in the /etc directory showed configuration for XDebug that I only needed to uncomment, along with providing a path to the PHP Zend-friendly xdebug extension (which I found in /usr/lib64/php/modules/).  The final step was to tell Eclipse to use XDebug.  You can find the debugger configuration information under Window -> Preferences -> PHP -> Debug.  Set the debugger to be XDebug.  If you’ve set the debugger to run on a port other than the default 9000 that the php.ini file specifies, you’ll need to change which port Eclipse uses by clicking the “Configure” link next to the PHP Debugger dropdown.
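
For reference, the relevant php.ini entries end up looking something like this (a sketch assuming XDebug 2.x - adjust the extension path and port to match your own system):

; Load the xdebug extension
zend_extension = /usr/lib64/php/modules/

; Allow the IDE to attach for remote debugging over DBGp
xdebug.remote_enable = 1
xdebug.remote_handler = dbgp
xdebug.remote_host = localhost
xdebug.remote_port = 9000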

I added a new debug configuration under Run -> Debug Configurations -> PHP Web Application.  Again, here I specified that I would be using XDebug, provided a default entry point into my project by browsing to my root index.php file, unchecked the “Auto Generate” checkbox under “URL,” and provided an MVC-compliant URL.  Clicking the “Debug” button would launch my web app at the indicated URL, and the code would break on my established breakpoints.

So now I’ve been happily coding along in PHP with Eclipse.  Every day it reminds me of why I like Visual Studio and the .NET Framework so much.  Now it’s just a matter of actually getting this pushed over into QA.  Hey, who drained all the water out of the pool?



About the Author

Sean Murphy is a Solutions Architect for Axian, Inc. in Beaverton, Oregon.

With over 20 years of experience in a broad range of technologies, Sean helps Axian clients realize functional solutions to their business challenges.