Ben Heymink

Software Developer - Javascript/Angular/node/C++/C#/MAPI/Outlook


A simple Angular RBA directive

It’s trivial in Angular to write small, reusable (and testable!) components to accomplish simple tasks that you might use throughout your application. One such component I had to write the other day was a simple Angular directive to dynamically show or hide (or rather, remove) page elements based on the currently logged in user’s role. The directive itself is crazy simple:
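Something along these lines (a minimal sketch; the directive name ‘hasRole’, the ‘UserService’ dependency and its ‘getRoles()’ method are illustrative names rather than the exact ones I used):

// Sketch of the directive - identifiers here are illustrative
angular.module('myApp').directive('hasRole', ['UserService',
  function(UserService) {
    return {
      restrict: 'A',
      link: function(scope, element, attrs) {
        // The roles permitted for this element, e.g. has-role="[101, 999]"
        var permittedRoles = scope.$eval(attrs.hasRole);
        var userRoles = UserService.getRoles();

        // Lodash gives us the roles the user holds that are also permitted;
        // if there's no overlap, the user may not see this element
        if (_.intersection(permittedRoles, userRoles).length === 0) {
          element.remove();
        }
      }
    };
  }
]);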

It has a single dependency on a service responsible for reporting the current user’s role, which in my application returns an array along the lines of [100, 101, 999] (Where each number indicates a specific role assigned to that user). The directive also expects a single argument; the permitted roles for the page element that we’ve attached the directive to. Imagine we use the role ID ‘101’ to indicate an admin; the directive usage might look something like this:
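For example (using the ‘has-role’ attribute name from the sketch above):

<!-- Only rendered for users holding the admin role (101) -->
<button has-role="[101]">Delete user</button>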

Within the link function of the directive, I’m using Lodash to determine the intersection of the roles passed to the directive and the actual roles for the user; the net result being an array of roles for the user that match any roles passed to the directive. If that array is non-zero in length, then at least one of the roles matches and there is nothing for us to do; the browser renders the element as usual. If there are no matching roles though, we can simply delete the element that the directive is attached to, removing it from the page.

Obviously our role checks are also performed on the backend to ensure no one is accessing something they shouldn’t, but this is a simple way to show or hide elements in your app based on the user’s role.

Understanding Protractor test Promises

Protractor, a tool for e2e testing of Angular-based applications, can be a great tool in your development pipeline. However, it can also take a bit of searching to work out exactly what it’s doing under the hood and how it can affect your tests. Simple tests such as the one below are easy to follow and don’t really cause beginners any issues:
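A representative example of such a test (the URL and title here are placeholders):

describe('my app', function() {
  it('should have the expected title', function() {
    browser.get('http://localhost:8000/');

    // Reads like synchronous code, but getTitle() actually returns a promise
    expect(browser.getTitle()).toEqual('My Title');
  });
});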

In the above example, we task Protractor with getting the browser title with a call to ‘getTitle()’, then simply add an expectation that it should match our anticipated response.

Under the covers, each of the lines above is actually an asynchronous operation, and when each line is executed, the action is added to a queue to be executed at some point. In fact, if you explore the documentation here, it specifies:

WebDriverJS (and thus, Protractor) APIs are entirely asynchronous. All functions return promises.

Behind the scenes, WebDriver, the underlying component driving your e2e tests, maintains this queue of promises, called the ‘Control Flow’, in order to keep everything executing in the correct order. Protractor actually modifies Jasmine so that each test spec waits until this ‘control flow’ queue is empty before exiting.

Jasmine expectations are also adapted to understand these promises. That’s why the last line in the example above works – the code actually adds an expectation task to the control flow, which will run after the other tasks:

expect(browserTitle).toEqual('My Title');

Writing tests without knowing that Protractor works in this way can lead to some puzzling results. Imagine you have a dynamic number of elements on the page and you wish to check that they have all been drawn in the browser; your first, naive implementation might look something like this:
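A sketch of that naive approach (assuming ‘myElements’ is an array of ElementFinders):

// BROKEN: isPresent() returns a promise, not a boolean
var results = [];
myElements.forEach(function(el) {
  results.push(el.isPresent());   // pushes a pending promise!
});

// 'results' holds promises, so indexOf(false) is always -1 and the
// expectation passes whether or not the elements actually exist
expect(results.indexOf(false)).toBe(-1);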

In this example, we’re iterating over some elements, calling ‘isPresent()’ to check if they exist on the page, then testing that hopefully they all exist, and that no calls to ‘isPresent()’ returned false.

This test won’t work though – the call to ‘isPresent()’ returns a promise, not a true/false value! How can we fix it? Like this:
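Along these lines (again a sketch, using the promise module WebDriverJS exposes through the protractor namespace):

var promises = [];
myElements.forEach(function(el) {
  promises.push(el.isPresent());
});

// Wait for every isPresent() promise to resolve, then check the real values
protractor.promise.all(promises).then(function(results) {
  expect(results.indexOf(false)).toBe(-1);
});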

Here we form an array of the promises returned from each call to ‘isPresent()’, then wait for all of them to resolve before we check the values.

65daysofstatic

Took me a while, but I managed to track down the music used in the No Man’s Sky trailer shown at E3 this year:

New music

Hard-hitting jazz meets electronica with this three-piece Manchester band. Enjoy!

Creating an internet-connected, build-monitoring, sweet-dispensing machine

We held an informal evening hackathon at work the other day and I set to work finally doing something interesting with my Raspberry Pi and a sweet-dispensing ‘thing’ I’ve had on my desk for a while. You can see the end result below: I basically ended up with a device that continuously monitors our builds at work and, should one ever succeed, dispenses some smarties/M&Ms on to my desk whilst updating a number of LEDs showing the current build status. In addition it uses a Flask-based web server to allow me to change the current build project it monitors.

[Photo: the finished build-monitoring sweet dispenser]

So how was it done?

The root of it all is my Raspberry Pi, running nothing but the supported Debian image from the Raspberry Pi site. Attached to that is an assembled Gertboard, connected to the Pi’s GPIO pins. I’m using the open collector driver component on the Gertboard, which allows me to switch the on-board motor in the sweet dispenser whilst using its own power supply. In addition, I’m using three of the buffered I/O pins from the Gertboard to control the red, green and yellow LEDs. Once it’s all wired up, a Python/Flask app controls the open collector and buffered I/O using Gordon Henderson’s WiringPi library.
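The GPIO side of that app might look roughly like this (a sketch using the WiringPi Python bindings; the pin numbers are placeholders that depend entirely on how your Gertboard is wired):

import time
import wiringpi

RED_LED, YELLOW_LED, GREEN_LED = 0, 1, 2   # buffered I/O pins (placeholders)
MOTOR = 3                                   # open collector driver pin

wiringpi.wiringPiSetup()
for pin in (RED_LED, YELLOW_LED, GREEN_LED, MOTOR):
    wiringpi.pinMode(pin, 1)                # 1 = output

def show_build_status(succeeded):
    # Green on success, red on failure
    wiringpi.digitalWrite(GREEN_LED, 1 if succeeded else 0)
    wiringpi.digitalWrite(RED_LED, 0 if succeeded else 1)

def dispense_sweets(seconds=2):
    # Pulse the dispenser motor via the open collector driver
    wiringpi.digitalWrite(MOTOR, 1)
    time.sleep(seconds)
    wiringpi.digitalWrite(MOTOR, 0)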

Twilio, Python & Flask – Best Friends

I’ve been playing around with ‘Twilio’ recently; it’s a cloud communications provider that allows developers access to a platform that lets them send and receive SMS, calls and more. Since they also provide some nice Python bindings around their API, I decided to sign up and give it a go.

After signing up you get a trial account that lets you send SMS and calls to a verified number (i.e. one that you can prove you own). Additionally they seem to provide you with a fair number of free SMS messages out of the box to play with. If you want to get serious, you can add some funds and start sending more messages/calls to unverified numbers as well. Once signed up, you get access to your own ‘Dashboard’ on their website that provides you with all the help and documentation you need. You’ll also get an allocated phone number, account SID and auth token that will allow you to talk to their API, and with these in hand you’re ready to begin.

I’m a recent virtualenv/pip convert, so I’m using those to manage my isolated python environment and to install dependencies. So, in my fresh virtualenv environment, the first thing to do is to use pip to grab the Twilio package we need:

venv>> pip install Twilio

After we’ve got the Twilio package we need, we can write our dead simple python app that will send an SMS:
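Something like the following (the SID, token and phone numbers are placeholders; this targets the TwilioRestClient from the twilio-python releases of the time):

from twilio.rest import TwilioRestClient

# Placeholders - grab your real SID and token from your Twilio Dashboard
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
AUTH_TOKEN = "your_auth_token"

client = TwilioRestClient(ACCOUNT_SID, AUTH_TOKEN)

def send_message():
    client.messages.create(to="+441234567890",      # your verified number
                           from_="+441234567891",   # your allocated Twilio number
                           body="Hello from Twilio!")

send_message()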

Pretty simple! We import the TwilioRestClient, which is how we’ll communicate with their API; we specify our account SID and token (grabbed from your Twilio Dashboard); we construct an instance of the TwilioRestClient; then I just have one simple method that sends the message to my phone.

Now let’s get fancy: if we bring Flask into the mix, we can start crafting a simple web page that can accept some user input then send a message. If you haven’t heard of Flask, it’s a dead simple web framework for python. Let’s continue with the previous code and start by installing Flask into our virtualenv:

>>pip install Flask

With this done, we can add Flask to our previous code:
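Here’s a reconstruction, laid out so that the line references below (lines 4, 9 and 16) still line up; the SID, token and phone numbers are placeholders as before:

from flask import Flask, render_template, request
from twilio.rest import TwilioRestClient

app = Flask(__name__)
client = TwilioRestClient("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # account SID
                          "your_auth_token")                     # auth token


@app.route('/')
def send_message():
    client.messages.create(to="+441234567890",      # your verified number
                           from_="+441234567891",   # your Twilio number
                           body="Hello from Twilio!")
    return "Success"

app.run(host='0.0.0.0', port=5000)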

So, all I have done here is pull in the Flask package, render_template (we’ll use it later) and request (again, we’ll use this later). On line 4 I create an instance of the Flask class, passing it the name of the module, which it needs in order to look for HTML files etc. (which we’re not currently using). On line 16, I’ve removed the explicit function call and replaced it with ‘app.run()’, specifying the address to listen for requests on, and a port number. When this line is invoked it starts a local server running our app.

Once running, any web requests to the address we’ve specified will be routed to our application, so we have to make one other change in order to be able to handle the requests; on line 9 I use the Flask route decorator to tell Flask what URL should trigger this function; in this case the ‘/’ I’ve used handles any requests to our application at http://127.0.0.1/. With all this done, we can now run the app from the command line:

>>python TwilioApp.py
* Running on http://0.0.0.0:5000/

And that’s it. If we open a browser and navigate to http://127.0.0.1:5000 the app should send a message to my phone and return the text “Success”.

Let’s get even more clever now, and set up a couple of static HTML pages – one to accept some text from a user, and another to replace our boring “Success” text. The two files we’ll use, form.html and success.html are below:
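They might look like this (the ‘/send’ action path and the page text are my assumptions, chosen to match the handler added below; the ‘Message’ field name is what the Flask code reads out of the form):

<!-- form.html -->
<!DOCTYPE html>
<html>
<head><title>Send a message</title></head>
<body>
  <form action="/send" method="POST">
    <input type="text" name="Message">
    <input type="submit" value="Send">
  </form>
</body>
</html>

<!-- success.html -->
<!DOCTYPE html>
<html>
<head><title>Success</title></head>
<body>
  <p>Message sent!</p>
</body>
</html>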

Nothing fancy here, just a form with a single text box and a button that will invoke a POST back to our app. Once you have these two files, chuck them in a folder named ‘templates’ next to the python code (it’s where Flask will look for them). Next, we do one final update to our previous code to use the new HTML files:
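A reconstruction, again laid out so the line references below (lines 9 and 13) line up; the handler names and the ‘/send’ route are illustrative:

from flask import Flask, render_template, request
from twilio.rest import TwilioRestClient

app = Flask(__name__)
client = TwilioRestClient("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",  # account SID
                          "your_auth_token")                     # auth token


@app.route('/')
def show_form():
    return render_template('form.html')

@app.route('/send', methods=['POST'])
def send_message():
    client.messages.create(to="+441234567890",
                           from_="+441234567891",
                           body=request.form['Message'])
    return render_template('success.html')

app.run(host='0.0.0.0', port=5000)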

Nothing too clever here: our previous @app.route(‘/’) on line 9 now uses the Flask render_template call to return our first form, and then we have another Flask decorator on line 13 that responds to POST actions; this one pulls the text out of the request.form field ‘Message’ and uses it in the SMS request to Twilio. Run the app again, navigate to http://127.0.0.1:5000 and you should see the following:

[Screenshot: the message form]

And after entering some text and hitting ‘Send’, you should receive the message and be presented with the following screen:

[Screenshot: the success page]

Simples!

Enterprise Vault – Determine archive range

There was a post the other day on the Symantec Connect forums, here, where somebody was wondering if there was a way to get hold of the size of an archive. Whilst Enterprise Vault has a webpage that an administrator can navigate to in order to see this (usage.asp), there is no readily available means for an end-user to access this information. However, by utilizing some of the other web pages used for Vault Cache synchronization, we can obtain this information:

Step 1: Identify the web page we are interested in and what information (if any) the client sends up to the server

Using Fiddler, a freely available tool, I performed a reset of a user’s Vault Cache then performed an initial synchronization whilst Fiddler monitored the web traffic. The web page we are after is the call the client makes to ‘GetVaultInformation.aspx’. Examining the request in Fiddler we can see that the client sends up a small amount of data with the request; an ‘action’ code and the user’s Archive ID:

Step 2: Replay the call using another user’s information

From here, it’s easy to wrap the same call into a little python script that can exercise that web page and report on the archive information: (Note I’m using Requests to handle the page request)
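A sketch of that script (the server address, Archive ID and the exact form field names here are placeholders; the real field names come straight out of the Fiddler capture):

import requests

# Placeholders - substitute your EV server address, the action code and
# the target user's Archive ID as captured in Fiddler
URL = "http://evserver/EnterpriseVault/GetVaultInformation.aspx"
payload = {
    "action": "1",
    "archiveId": "1A2B3C4D5E6F7890...",
}

response = requests.post(URL, data=payload)
print(response.status_code)
print(response.text)   # the archive information we're after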

Running the script and adding the relevant user information, we get back the archive information we were after:

That’s it!

MSBuild Javascript minification

As part of some recent work I did, it was decided that some Javascript files we had written needed ‘minifying’ as part of a build step. For those who have never heard of that term, it’s possible to dramatically reduce the size of Javascript files (whilst keeping the behavior unchanged) by performing certain steps against the file: removing unnecessary white space, comments, line breaks, etc.

Microsoft Ajax Minifier

We ended up using a tool from Microsoft, available here, that allows for minification of both JS and CSS files, perfect for us. Once downloaded, the tool has a simple command line usage:

>ajaxmin.exe sourcefile.js -out outputfile.js

This is perfect for experimenting with the tool and validating the output, something I did early on to ensure there wouldn’t be any issues with the resultant files.

Once downloaded, the tool also includes an MSBuild task that can be imported and used in Visual Studio projects to automatically minify your JS and CSS files as part of a project build, something we needed. My requirements were a bit more complicated, however: I didn’t want the minified files in source control, I needed the files to keep the same name, and I needed the minified files to be spat out into a new ‘Output’ folder as part of the build. Below are the steps I took to achieve this:

Set up the project to use AjaxMinTask.dll (The MSBuild task)

The target project for us was an existing ASP.NET Web Application that contained a couple of web pages and our target JS files. With Visual Studio open, the first thing to do is to unload the project so we can manually edit the .csproj file. In order to do this, you simply right-click on the project and select ‘Unload Project‘. With the project now unloaded, you should be able to right-click the project again and now select ‘Edit myProj.csproj’ (Or whatever your project is named).

When the resultant file opens, scroll down near to the bottom and you should see a piece of commented out text:

<!-- To modify your build process, add your task inside one of the targets below and uncomment it. -->

It’s that commented-out block of text that we need to replace with our custom build step. Let’s build up my minification solution step by step:

Step 1: Removing any existing Minified Files (Cleaning)

Because of some limitations with our source control and the version of AjaxMin we are forced to use, it’s not possible for us to overwrite any existing minified files, so the first thing we do is blow away any that already exist. Add the following MSBuild target to the .csproj file, directly overwriting the commented-out text mentioned above:

<!-- STEP 1: DELETE THE OUTPUT DIR IF IT ALREADY EXISTS -->
<Target Name="AfterBuild">
  <RemoveDir Directories="$(MSBuildProjectDirectory)\Resources\OUTPUT\" ContinueOnError="True" />
  <CallTarget Targets="CopyJSAndCSS" />
</Target>

Let me explain for a moment what the above block does. The Target element simply describes one or more ‘tasks’ which will run as a group, perhaps to rename some files, move some files, or in the case above, delete some files. The attribute

Name="AfterBuild"

Ensures that this ‘target’ will run after the project has been built. You’ll see later that my other ‘Target’ blocks have different names, but this first one, the entrypoint if you will, has the reserved name ‘AfterBuild’.

The next line calls a built-in MSBuild task, ‘RemoveDir’, and passes it a collection of directories (folders) to delete. Here, we only pass it one folder, our ‘Output’ folder that we will later spit out our minified files to. The use of the built-in property ‘$(MSBuildProjectDirectory)’ provides a way to get the actual path of the current csproj project file, which our Output folder is relative to. The attribute ‘ContinueOnError’ ensures that even if the folder doesn’t yet exist, the task will complete happily.

The final line explicitly calls our next ‘Target’ by name. In this way we can chain together multiple Targets ensuring they get run in the order we want. Here, once this target has completed removing the Output folder we call the next target, ‘CopyJSAndCSS’:

Step 2: Copying JS/CSS files ready for minification

Add this next target directly under the one you previously added:

<!-- STEP 2: RECURSIVELY COPY .JS & .CSS FILES TO OUTPUT DIR, MAINTAINING DIR STRUCTURE -->
<Target Name="CopyJSAndCSS">
  <ItemGroup>
    <SourceFilesToCopy Include="$(MSBuildProjectDirectory)\Resources\**\*.js;$(MSBuildProjectDirectory)\Resources\**\*.css" />
  </ItemGroup>
  <Copy SourceFiles="@(SourceFilesToCopy)" DestinationFiles="@(SourceFilesToCopy->'$(MSBuildProjectDirectory)\Resources\OUTPUT\%(RecursiveDir)%(FileName)%(Extension)')" />
  <CallTarget Targets="MinifyFiles" />
</Target>

This next target handles copying our JS and CSS files into our Output folder, ready to be minified. The ‘ItemGroup’ element in the above target allows us to define one or more user-defined ‘Items’, each with their own attributes. In the ItemGroup above I have defined an object ‘SourceFilesToCopy’ that has an ‘Include’ attribute, an attribute required by a later task I will call that takes an item describing a list of files. The ‘Include’ attribute above actually describes two sets of files, my JS files and my CSS files, separated by a semicolon. The \**\*.js and \**\*.css syntax ensures that we match all the files under the ‘Resources’ folder, including all files in any subfolders.

With the ‘SourceFilesToCopy’ item set up, we call another MSBuild built-in task, ‘Copy’. The ‘Copy’ line above takes an input (SourceFilesToCopy) and copies files to the ‘DestinationFiles’ attribute. The syntax in that attribute is simply performing a recursive copy, preserving the folder hierarchy, file name and extension. In effect this simply mirrors the folder and file structure of the ‘Resources’ folder to the ‘Output’ folder. With this Task complete, we call our next Target, ‘MinifyFiles’:

Step 3: Minification

Add the next Target directly below the last:

<!-- STEP 3: MINIFY JS & CSS -->
<UsingTask TaskName="AjaxMin" AssemblyFile="$(SolutionDir)..\OurCustomMSBuildTasks\AjaxMin\AjaxMinTask.dll" />
<Target Name="MinifyFiles">
  <ItemGroup>
    <JSFilesToMinify Include="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.js" />
    <CSSFilesToMinify Include="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.css" />
  </ItemGroup>
  <AjaxMin JsKnownGlobalNames="jQuery,$" JSSourceFiles="@(JSFilesToMinify)" JSSourceExtensionPattern="\.js$" JSTargetExtension=".min_js" CssSourceFiles="@(CSSFilesToMinify)" CssSourceExtensionPattern="\.css$" CSSTargetExtension=".min_css" />
  <CallTarget Targets="DeleteOrigFiles" />
</Target>

Ok, this one is a longer one, since it’s actually doing the minification, so let’s look at it in a bit more detail:

1) (UsingTask) In this first line we explicitly reference a third-party MSBuild task, in our case the task that AjaxMin supplies when you download it from Microsoft’s website.

2) (ItemGroup) In this Item group, I set up two file collections as before; a list of JS files to minify, and a list of CSS files to minify.

3) (AjaxMin) Here I actually call the AjaxMin task. The arguments/attributes I’m passing here are well documented on the Microsoft web page, but briefly:

– We ensure that ‘jQuery’ and ‘$’ literals do not get renamed as part of the minification process

– We pass a list of JS files to be minified

– We pass the extension of the JS files that the AjaxMin task should look for/target

– We pass the extension we want for the minified JS files

– We repeat the above 3 steps for our CSS files too.

With the files minified to their new .min_js and .min_css counterparts, we call our next target, ‘DeleteOrigFiles’:

Step 4: Delete Non-minified (original) files

Add the next step directly below the last one:

<!-- STEP 4: DELETE NON-MINIFIED JS/CSS IN OUTPUT DIR -->
<Target Name="DeleteOrigFiles">
  <ItemGroup>
    <OriginalJSFilesToDelete Include="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.js" Exclude="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.min_js" />
    <OriginalCSSFilesToDelete Include="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.css" Exclude="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.min_css" />
  </ItemGroup>
  <Delete Files="@(OriginalJSFilesToDelete)" />
  <Delete Files="@(OriginalCSSFilesToDelete)" />
  <CallTarget Targets="RenameMinifiedFiles" />
</Target>

This next Target should be self-explanatory by now, but all it does is delete the original .js and .css files we copied over to the ‘Output’ directory, ensuring we don’t delete the .min_js/.min_css files by using the ‘Exclude’ attribute on the file collections we set up. After deleting the original files, we’re onto the last step!

Step 5: Rename minified files

Add this last step directly after the last one:

<!-- STEP 5: RENAME *.MIN.* FILES -->
<Target Name="RenameMinifiedFiles">
  <ItemGroup>
    <MinifiedJSFilesToRename Include="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.min_js" />
    <MinifiedCSSFilesToRename Include="$(MSBuildProjectDirectory)\Resources\OUTPUT\**\*.min_css" />
  </ItemGroup>
  <Copy SourceFiles="@(MinifiedJSFilesToRename)" DestinationFiles="@(MinifiedJSFilesToRename->'$(MSBuildProjectDirectory)\Resources\OUTPUT\%(RecursiveDir)%(FileName).js')" />
  <Copy SourceFiles="@(MinifiedCSSFilesToRename)" DestinationFiles="@(MinifiedCSSFilesToRename->'$(MSBuildProjectDirectory)\Resources\OUTPUT\%(RecursiveDir)%(FileName).css')" />
  <Delete Files="@(MinifiedJSFilesToRename)" />
  <Delete Files="@(MinifiedCSSFilesToRename)" />
</Target>

This last step handles renaming our minified (.min_js/.min_css) files back to their original file extensions. The syntax follows that of one of our previous steps (the copying of the files in step 2) but overrides the file extension of the recursive copy:

<Copy SourceFiles="@(MinifiedJSFilesToRename)" DestinationFiles="@(MinifiedJSFilesToRename->'$(MSBuildProjectDirectory)\Resources\OUTPUT\%(RecursiveDir)%(FileName).js')" />

And that’s it! Now, when we build that project, we get an Output folder created that is an exact copy of our ‘Resources’ folder, but with the JS and CSS files minified.

Simple C# Outlook Add in

There was a question posted on the Symantec Connect forums the other day where someone was asking for some example code to enumerate items within Outlook and determine if they were Exchange mail items or items that have been archived by Enterprise Vault.

The code to do this is pretty simple actually: we can use the Outlook Object Model to navigate through folders accessing items, and from there it is simply a case of having a peek at the PR_MESSAGE_CLASS property of the item and checking to see if it matches ‘IPM.Note.EnterpriseVault.Shortcut’.
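The heart of it is only a few lines. Here’s a minimal sketch of the check (my illustration, not the full add-in linked below), walking the Inbox and inspecting each item:

using System.Diagnostics;
using Outlook = Microsoft.Office.Interop.Outlook;

// Intended to live inside the generated ThisAddIn class, where
// 'Application' is the Outlook Application object
private void EnumerateInbox()
{
    Outlook.MAPIFolder inbox =
        Application.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderInbox);

    foreach (object obj in inbox.Items)
    {
        Outlook.MailItem mail = obj as Outlook.MailItem;
        if (mail == null)
            continue;   // skip non-mail items (meeting requests, reports, etc.)

        // MessageClass surfaces the underlying MAPI PR_MESSAGE_CLASS property
        if (mail.MessageClass == "IPM.Note.EnterpriseVault.Shortcut")
            Debug.WriteLine("Enterprise Vault shortcut: " + mail.Subject);
        else
            Debug.WriteLine("Exchange mail item: " + mail.Subject);
    }
}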

To develop this code, you’ll need to set up a computer for Microsoft Office development by following the steps listed here. Once you’ve done that, fire up Visual Studio, create a new project, and select the ‘Outlook 2010 Add-in’ (or 2007 Add-in) template under the ‘Office’ section of the Visual C# project templates:

Name it something useful, then hit OK. Once the project has loaded, you’ll want to replace the contents of the template-provided file ‘ThisAddIn.cs’ with my code, which you can get from here.

After replacing the code, hit F5. This will build the solution and add some registry keys that Outlook reads to load your add-in. To get rid of it, right-click on the solution root and select ‘Clean Solution’.

That’s it! Simples!

 

Integrating Javascript unit tests with Visual Studio build

In the office where I work, automated unit testing as part of our nightly build is the norm. We use a variety of test frameworks (NUnit, Google Test, etc.) in order to test a huge amount of native and managed code. Recently however, my team has been writing some fresh javascript code around a new feature for Exchange 15 (which I can’t go into details about) and we faced a problem around writing and automating Javascript-based unit tests. Ideally we wanted our suite of (javascript) unit tests to be run alongside the rest of the more traditional native and managed unit tests, from within our build environment (MSBuild) AND also be accessible to individual developers to execute as part of a local build step.

So how did we do it? The first step was getting hold of QUnit, a Javascript unit test suite capable of executing our tests; in fact, QUnit is apparently used to unit test not only itself, but also jQuery UI, jQuery and jQuery Mobile. As per the site, tests are written within a Javascript file using the QUnit ‘test’ construct:

test( "hello test", function() {
  ok( 1 == "1", "Passed!" );
});

The file containing the above test can then be referenced from a simple HTML document, itself including the core QUnit files; a css stylesheet to style the test result output, and the QUnit javascript file, qunit.js, which will execute the tests:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>QUnit Example</title>
  <link rel="stylesheet" href="/resources/qunit.css">
</head>
<body>
  <div id="qunit"></div>
  <script src="/resources/qunit.js"></script>
  <script src="/resources/tests.js"></script>
</body>
</html>
(The referenced file, tests.js, contains the javascript tests.)
With the above setup, we can simply load the HTML document in a browser to see the results of the executed tests:

So that’s all great and groovy, but requires a manual step of launching the document in a browser in order to execute the javascript.

Enter Chutzpah, a command-line javascript test runner. It uses PhantomJS to execute javascript in a headless (read: no window) WebKit browser and also allows us to execute it from the command line. Usage is as simple as specifying the above HTML document as an argument to the Chutzpah runner:

chutzpah.console.exe test.html

Furthermore, we can omit the HTML document altogether and simply specify the raw javascript file! With this in our toolbox, it was as simple as writing a new batch file that we could call as a post-build step:

%~dp0\chutzpah.console.exe %~dp1%2 /silent /timeoutMilliseconds 20000

Our post build step then, looks like this:

IF "$(ConfigurationName)" == "Unit Test" call "$(SolutionDir)..\Common\UnitTest\Chutzpah\HandleJSUnitTests.bat" "..\MyTests\" MyTestFile.js

