You don’t always have to use an existing framework/toolset/platform to accomplish a common task. You’re a coder. You can code it.
I was reminded of this in two different scenarios recently.
Unit Testing
I’ve been taking a lot of algorithmic coding tests for potential employers on sites like HackerRank, Codility, CodeSignal, etc. Usually it works like this:
- You have a problem: given these types of inputs, produce this type of output
- You are given constraints for the inputs
- You are given example inputs and the expected output
- There is some scaffolding code to read the input and print the output
- There is some way for you to enter custom input for testing
- Sometimes there are some basic test scenarios that you can see that your code will go through
- Always there are unseen test scenarios that your code will go through
One big problem here is that when trying to test (and retest, and remember) all of the edge cases, manually entering custom input data isn’t very efficient or reliable for catching regression issues. You probably won’t have access to a unit testing framework or assertion library, and there won’t be a lot of time to comment, uncomment, or copy/paste/move code. But you can still whip up something to quickly add cases and repeatedly execute them. Usually the inputs and outputs aren’t that complicated - consider this setup for a hypothetical problem based on the Bowling Kata:
static int CalculateScore(string[] marks)
{
// put your code here
}
static void Main(string[] args)
{
// this is scaffolding that invokes your code and you typically won't change it
// sometimes it reads from/writes to file paths taken from environment variables
// or something else that's specific to your sandbox
var input = Console.ReadLine();
var marks = input.Split(' ');
Console.WriteLine(CalculateScore(marks));
}
In about a minute we can redirect to a method containing our main algorithm and call another method that runs our test cases before the real execution, using console output as a pass/fail indicator.
static int CalculateScore(string[] marks)
{
RunTests();
// put your code here
return Calculate(marks);
}
static int Calculate(string[] marks)
{
// I'm totally not implementing this
return 0;
}
static void RunTests()
{
Case(300, "X X X X X X X X X X X X");
Case( 0, "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0");
Case( 20, "1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1");
}
static void Case(int expected, string marks)
{
var result = Calculate(marks.Split(' '));
Console.WriteLine("{0} {1}", expected == result ? "PASS" : "FAIL", String.Join(' ', marks));
}
We could just as easily throw an exception on failure. Doing it this way might be a bad idea for performance-intensive cases or when the calling framework executes many different scenarios (these things are timed, after all). And in this case it would have been fine to alter the scaffolding code instead of duplicating the string split… the point is, it’s not hard to use a little code to give yourself rudimentary versions of the tools you would definitely have in the real world. The same principle applies when writing a quick command-line utility with some testable functionality - in lieu of setting up more traditional tests, you can just add a command-line switch that diverts execution to a method that runs all of your scenarios. At the end of the day, all you need to write a unit test is a comparison operator.
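Here’s a minimal sketch of that switch idea - the --test flag, like the method names, is just a placeholder for whatever fits your utility:
static void Main(string[] args)
{
if (args.Length > 0 && args[0] == "--test")
{
// hypothetical switch: run the scenarios and exit instead of doing the real work
RunTests();
return;
}
// ...normal execution path...
}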
Deployment
I have an ASP.NET Core app that functions as a proxy for a couple of service endpoints that some applications I have running on a Raspberry Pi connect to. The functionality isn’t important for this post. I develop it on a Windows machine, and while it could run anywhere on my local network with a port accessible to the RPi, it might as well run on the RPi itself. I don’t modify it often, but when I do, the manual process of deploying it is a pain:
- Build/publish to a folder via Visual Studio with the appropriate profile (target netcoreapp2.2, linux-arm, etc.)
- Modify the runtimeconfig.json in the output
- Connect via SFTP to the RPi to upload the output
- Optionally restart the systemd service (which was created as a one-time step during the initial deployment)
I figured out how to modify the runtimeconfig.json with project settings, but the manual SFTP upload was still by far the most painful part of the process. I wanted to be able to just one-click deploy from my laptop to my RPi. So I guess I’ll install an Octopus server, figure out if and how tentacles work on RPi, configure deployment steps, … no. NO. Or I could just write a custom MSBuild task, right? NOOOO.
I told myself, “just write some code.” I added a new console project to the solution (Whatever.Deploy.Cmd). I added a method that basically called dotnet build in a new process and showed the output. I pulled down SSH.NET from NuGet and wrote a method that uploaded the output to the RPi via SFTP. Then I wrapped it up with some error handling and display logic, added a short script to invoke the deployment program, and voila! I can right-click and select ‘Execute File’ on my deployment script in Visual Studio (alternatively Shift-Alt-F5) and ship an updated release to my Raspberry Pi.
Here’s what my hand-rolled deployment tasks look like:
class BuildRelease : DeploymentTask
{
protected override IExecutionResult ExecuteTask()
{
using (var build = new Process())
{
build.StartInfo.FileName = "dotnet";
build.StartInfo.Arguments = @"build Whatever.WebApi.csproj -c Release /p:PublishProfile=FolderProfile /p:DeployOnBuild=true";
build.StartInfo.WorkingDirectory = @"C:\path\to\Whatever\Whatever.WebApi";
build.StartInfo.RedirectStandardOutput = true;
build.StartInfo.RedirectStandardError = true;
build.StartInfo.UseShellExecute = false;
build.StartInfo.CreateNoWindow = true;
build.Start();
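// read stdout before waiting for exit so the redirected output buffer doesn't fill up and block the build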
Console.ForegroundColor = ConsoleColor.DarkGray;
Console.WriteLine(build.StandardOutput.ReadToEnd());
Console.ForegroundColor = ConsoleColor.Gray;
build.WaitForExit();
if (build.ExitCode != 0)
{
return new Failed(build.StandardError.ReadToEnd());
}
return new Succeeded();
}
}
}
class CopyToRaspberryPi : DeploymentTask
{
protected override IExecutionResult ExecuteTask()
{
using (var client = new SftpClient("XXX.XXX.XXX.XXX", "user", "password"))
{
Console.ForegroundColor = ConsoleColor.DarkGray;
Console.WriteLine();
client.Connect();
client.ChangeDirectory("/deployment/destination");
var sourceDirectory = @"C:\path\to\Whatever\Whatever.WebApi\bin\Release\netcoreapp2.2\publish";
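// upload every file in the publish output to the current remote directory, overwriting existing files and reporting progress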
foreach (var localPath in Directory.EnumerateFiles(sourceDirectory))
{
var info = new FileInfo(localPath);
var progress = new FileProgress(info.FullName, (ulong)info.Length);
var filename = Path.GetFileName(localPath);
using (var file = new FileStream(localPath, FileMode.Open, FileAccess.Read))
{
client.UploadFile(file, filename, true, size => progress.WriteProgress(size));
}
}
Console.ForegroundColor = ConsoleColor.Gray;
client.Disconnect();
}
return new Succeeded();
}
private class FileProgress
{
private readonly string localPath;
private readonly ulong totalBytes;
private bool hasCompleted;
private static readonly object sync = new object();
public FileProgress(string localPath, ulong totalBytes)
{
this.localPath = localPath;
this.totalBytes = totalBytes;
}
public void WriteProgress(ulong bytesUploaded)
{
lock (sync)
{
if (!hasCompleted)
{
var percentComplete = Math.Round(((decimal)bytesUploaded / (decimal)totalBytes) * 100m).ToString().PadLeft(3, ' ');
Console.CursorLeft = 0;
Console.Write($"[{percentComplete}%] {localPath}");
if (bytesUploaded == totalBytes)
{
Console.WriteLine();
hasCompleted = true;
}
}
}
}
}
}
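The DeploymentTask base class and the result types (IExecutionResult, Succeeded, Failed) aren’t shown above; they’re just a few lines of plumbing. A rough sketch of what they might look like, along with the Main that runs the tasks in order - illustrative only, the real versions may differ:
interface IExecutionResult { }
class Succeeded : IExecutionResult { }
class Failed : IExecutionResult
{
public Failed(string reason) { Reason = reason; }
public string Reason { get; }
}
abstract class DeploymentTask
{
// run the task and report the outcome; derived classes supply ExecuteTask
public IExecutionResult Execute()
{
Console.WriteLine($"Running {GetType().Name}...");
var result = ExecuteTask();
if (result is Failed failed)
{
Console.ForegroundColor = ConsoleColor.Red;
Console.WriteLine(failed.Reason);
Console.ForegroundColor = ConsoleColor.Gray;
}
return result;
}
protected abstract IExecutionResult ExecuteTask();
}
class Program
{
static void Main()
{
var tasks = new DeploymentTask[] { new BuildRelease(), new CopyToRaspberryPi() };
foreach (var task in tasks)
{
if (task.Execute() is Failed)
{
// stop and wait for input so the console stays open when something goes wrong
Console.ReadLine();
return;
}
}
}
}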
And here is the deployment script:
REM build the actual deployment project
dotnet build "C:\path\to\Whatever\Whatever.Deploy.Cmd.csproj" -c Release
REM run the deployment project
dotnet "C:\path\to\Whatever\Whatever.Deploy.Cmd\bin\Release\netcoreapp2.2\M3UTransformer.Deploy.Cmd.dll"
REM the deployment project will stop and wait for input if
REM anything goes wrong. if everything is fine, we're done
exit
The solution is far from perfect. There are hardcoded paths in a few places, I’m not doing anything with the service startup (yet), and the script should probably be checking exit codes or something, but overall, with just a few dozen lines of code, it was a quick way to automate something that was painful to do manually.