Mocking in PyTest (MonkeyPatching)

This blog post is a continuation of my last post on using PyTest and the PyTest-Sanic plugin to test Sanic endpoints. Please skim through it so that you are up to speed with the details of this post.

The main focus of this post is learning how to mock a module that your test depends on. Mocking is a technique in unit testing. While the concept of mocking is the same across different tools and languages, the methods and syntax used differ.

I will be using Python and PyTest to illustrate how this is done.

Here’s a scenario I have been presented with:

  • I have a Sanic Web Service that accepts POST requests passing in a file path
  • The web service will parse the path provided and instantiate an object that will return a list of files in the path. In doing so, it will use Boto3 to make calls to AWS infrastructure.
  • The web service will return data to the client via a JSON object {'files': []}, with the names of the files found inside the array.

The question is this: how can I test the functionality of this endpoint without the parsing object inside the web service making actual calls to AWS?

Why is making actual calls to AWS a bad idea?

Before we continue answering our question, I want to do a quick intermission to answer a big question you might have. Why not just let it call AWS and return results? How do you know that things work if you don’t actually make the calls?

Great question! The problem is that while AWS is a very reputable cloud provider, there is no guarantee that it will be 100% available at all times. There is always some inherent risk that the infrastructure goes down, even if it is just 0.000001%. Another issue is that making calls over the Internet is expensive: it consumes your network and, most valuable of all, your time. Allow me to exaggerate a bit to demonstrate my point: if you have 10,000 test cases written for your application and each test depends on an AWS call, a single run consumes 10,000 calls over your (or your company's) network. Some services, such as AWS API Gateway, charge per request or allot you a limited number of requests. Now picture multiple applications, each with 10,000 tests that make calls to AWS. This gets expensive really fast! Having tests run locally and immediately inform you whether your application is behaving properly is paramount. A test that would have taken 500 ms or even 1 second can be done in just 6 ms or less. That's at least 83x faster!

Great! So, what is mocking?

I like to use the analogy of a parrot when explaining mocking. Just like a parrot mimicking human speech, mocking in unit testing is mimicking the response that you would expect from a module or a function.

We are swapping out the calls to AWS with the expected response that we would get from AWS depending on the scenario we are trying to test. This is great because we can focus on testing the implementation of our logic rather than dependencies such as network connectivity or the availability of AWS services.
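As a minimal sketch of the idea (using hypothetical names, and pytest's MonkeyPatch helper directly rather than the fixture, so it runs outside a test session), we can swap out a method that would normally hit the network for a canned response:

```python
import pytest

class Weather:
    def forecast(self):
        # Imagine this method makes a real network call
        raise RuntimeError("network call attempted during test")

def fake_forecast(self):
    # Canned response, just like a parrot repeating what we taught it
    return "sunny"

mp = pytest.MonkeyPatch()
mp.setattr(Weather, "forecast", fake_forecast)
result = Weather().forecast()  # calls fake_forecast, not the real method
mp.undo()                      # restore the original method

print(result)  # sunny
```

Inside a test function you would normally receive `monkeypatch` as a fixture instead of constructing `MonkeyPatch()` yourself; the patch is then undone automatically when the test ends.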

How do we do it?

In your test file

  1. Import your Sanic application
  2. Import your module
  3. Create a class that will represent your response object
  4. Write a method inside the class that will return the response. Declare it as a static method.
  5. Write your test function, accepting monkeypatch as a parameter
  6. Create a sub-function under your test function that will create an instance of your response class and invoke the response method.
  7. Apply monkeypatching to the module you want to mock, specifying the method to mock and passing the name of your sub-function that will return the response.
  8. Invoke Sanic app’s test_client and make a POST request to the desired endpoint.

Ok, that was a mouthful. Let’s see it in action.

from myApp import app
from myModule import Module
import json

# This class captures the mocked responses of 'Module'
class ModuleResponse:
  @staticmethod
  def filesFound(*args, **kwargs):
    return ['folder1/file1.pdf', 'folder1/file2.pdf']

  @staticmethod
  def noFilesFound(*args, **kwargs):
    return []

def test_return_results(monkeypatch):
  async def mock_response(*args, **kwargs):
    return ModuleResponse().filesFound()

  # 'searchFiles' is a method of Module that the endpoint will call
  # and also the method we want to mock. The 3rd argument passes
  # the function to return the mock results we want
  monkeypatch.setattr(Module, "searchFiles", mock_response)

  # Using Sanic app's test_client to make a post request to our endpoint
  response ='/search', gather_request=False, json={'path': 'folder1'})

  result = json.loads(response.body)
  assert response.status == 200
  assert result == {'files': ['folder1/file1.pdf', 'folder1/file2.pdf']}

# Providing an empty path should return [] with 200
def test_search_empty_path(monkeypatch):
  async def mock_response(*args, **kwargs):
    return ModuleResponse().noFilesFound()

  monkeypatch.setattr(Module, "searchFiles", mock_response)

  response ='/search', gather_request=False, json={'path': ''})
  result = json.loads(response.body)
  assert response.status == 200
  assert result == {'files': []}

# myModule
import boto3

class Module:
  def __init__(self, searchPath=''):
    self.searchPath = searchPath
    self.files = []
    self.client = boto3.client('s3')  # S3 assumed here

  async def searchFiles(self):
    return await self.__lookupFile()

  async def __lookupFile(self):
    # Uses self.client (Boto3) to look up the files under self.searchPath
    ...

# myApp
from sanic import Sanic
from sanic.response import json
from myModule import Module

app = Sanic('myApp')

@app.route("/search", methods=['POST', 'OPTIONS'])
async def do_post(request):
  try:
    searchPath = request.json['path']
    myModule = Module(searchPath)
    results = await myModule.searchFiles()
  except Exception as e:
    return json({'error': str(e)}, status=500)

  myResponse = {'files': results}

  return json(myResponse)

As you can see, monkey patching is key! MonkeyPatch replaces our implementation of Module's searchFiles() method with our fake response.

I hope this guide has helped you gain a better understanding of mocking and how to do it using PyTest.

If you think this has helped you in any way, please help share this post and feel free to add me on Twitter @AlexLHWang.

Sanic Endpoint Testing using PyTest

Recently, I started working with my very first Python web framework at work, called Sanic. While the framework is relatively easy to use, I cannot say the same for unit testing, as its documentation isn't very clear to me.

If you’re having issues getting unit testing set up for this web framework, you have come to the right place.

Before We Begin

I want to put a disclaimer that I am using Sanic version 19.9.0 on Python 3.7.6 on MacOS 10.15.3; This implies that installation instructions will be biased towards Macs, but Windows or Linux installation of software should be similar with minor modifications.

What is Sanic?

Sanic is a Python 3.6+ based web framework that allows you to create an HTTP server. Its strength lies in its use of Python’s async and await syntax, released in version 3.5, allowing non-blocking code for higher performance.

Things you need

  1. Python 3.6+ and pip3 installed on your machine
  2. Virtual Env
  3. PyTest

Setting up the environment

The workflow is as follows:

  1. Install Python 3.6 or higher
  2. Install Virtual Env
  3. Install PyTest

Install Python 3.6+

The standard way of doing so would be downloading the latest installer from the official Python website. However, if you use a Mac, you can use Homebrew to get the job done.

At the time of writing, the latest version of Python is 3.8.1. To install using homebrew, execute brew install python3. Homebrew will automatically install the latest Python 3 available.

Install Virtual Env

Virtual Env allows you to lock down your dependencies to a specific version and is isolated from your main system. This ensures system packages do not conflict with your project’s packages.

  1. Execute pip3 install virtualenv in terminal.
  2. Create a virtual environment: virtualenv <name> (ie: virtualenv myProject)
  3. Start the virtual environment: source <name>/bin/activate (ie: source myProject/bin/activate)
  4. Install dependencies using pip3 as usual.

To exit from the virtual environment, use the deactivate command.

Install PyTest

  1. In the activated virtual environment, execute pip3 install pytest
  2. Execute deactivate && source <name>/bin/activate (ie: source myProject/bin/activate). This will refresh the virtual environment once the path for PyTest has been set.

Create the First Route of your Application


from sanic import Sanic
from sanic.response import text

app = Sanic('myApplication')

@app.route('/')
async def main_route(request):
  return text('Hello')

if __name__ == "__main__":"", port=1234)

Creating a Simple Test

The biggest issue I had was trying to understand how to import my web service application into the test.

I placed my test at the root directory of my project folder, co-locating it with, just for a proof of concept. In the future, I intend to put tests under a “test” folder.

PyTest first looks at any arguments passed on the command line when executing pytest, then at the testpaths attribute inside config files, before recursively searching the current directory for files named test_*.py and *
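For example, a hypothetical file named would be collected automatically, because both the filename and the function name match the conventions:

```python
# -- collected because the filename matches test_*.py

def add(a, b):
    return a + b

# Collected because the function name starts with test_
def test_add():
    assert add(2, 3) == 5
```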

To create a simple test, I co-located my test next to

from myApp import app

def test_default_route():
  request, response = app.test_client.get('/')
  result = response.text
  assert response.status == 200
  assert result == 'Hello'

The method I used to import my web service application is via relative imports. When code gets imported into the test file, the code is automatically executed (similar to the way JavaScript behaves). This means an instance of the Sanic application is instantiated in memory. Thus, you can use it immediately to test the endpoint.

Sanic object instances contain an attribute called test_client, which mimics an HTTP client making calls to an endpoint using HTTP methods. In my example, I used the get method on the main route. You can of course change the HTTP method to match your needs.

I hope this has helped you get started in endpoint testing with Sanic. I will be adding subsequent blog posts as I learn more about this topic and learn from best practices.

How to remove Dell U2711 Ultrasharp monitor stand


Around 6 years ago, I bought my very first professional grade monitor. It had best-in-class colour accuracy for 2012 and an IPS panel, something quite new at the time. It was revolutionary. The cost was also revolutionary: a whopping $800 during a sale, with the MSRP at over $1,000. The monitor came in the box with the stand already attached.

6 years later, I decided it’s time to put it on a monitor arm that allows more adjustment for better ergonomics. I went online to search for the long-forgotten manual via Google and was shocked to find that there were no instructions on how to remove the stand from the monitor! If you’re reading this right now, you’re probably in the same situation.

I gave Dell a call to see if their technicians would know. The first thing the technician said after I told him about my problem was “Ok sir, this is a five year old monitor. Let me go on Google to see if I could find the manual and find out how to do this.” I thought, perhaps the Dell technician has special magical powers in Googling that I didn’t know about. So I waited about 5 minutes for him to find information. He comes back and says “Sir, the manual does not seem to provide information for how to remove the stand from the monitor. Let me see if I can find information in our database for how to remove the stand from this monitor. Let me check our database.” I thought, “Cool, perhaps there may be some secrets hidden in Dell’s very own database that will save the day”. Another 5 minutes go by and the technician returns, “Sir, I’m so sorry, but it seems that our database does not have information on this either”. Like any concerned consumer, I followed up with “Ok, so what are my next steps? What can I do about this?” He says “Well, you can try to ask a friend who has done this before how to do this.” Steam was pouring out of both sides of my ears at this point. If I had a friend that knew how to do this, would I be calling Dell technical support in the first place?! I told him as calmly as I possibly could “Well, I don’t have such a friend”. His response was “Well sir, in this case, you can continue to try on your own because there’s nothing much we can do on our end.” I decided that hassling him would do us no good. So I thanked him and went on my way.

I followed the fellow’s advice and after some trial and error, I figured it out myself. I hope that this guide will be useful to you.

This experience above has left me with a sour taste in my mouth. I had hoped that a big company like Dell would have systems in place to handle situations like this. Perhaps an escalation of the issue from the representative to an actual engineer who may know how to solve this problem or giving proper training to employees about the general designs of the monitors and common questions that may come up.

What I did not include in the transcript was the fact that the representative admitted that I was on the line with the technical hardware team. Yet, the irony of all of this was that the “technical hardware team” did not have the answers to the company’s very own products, instead relying on Google to find its own products’ manuals; it reflects that the management team is not providing its people with the tools and training to succeed.

This leaves a lot more to be desired. I sincerely hope that this company would work on a strategy to turn this around.

Items you need

* T20 panhead screwdriver

Steps to remove the stand

1. Prepare a soft, smooth surface for the monitor’s screen to rest upon.
2. Place the monitor flat on the surface prepared in step 1, with the screen facing down.
3. Take the T20 screwdriver and remove the four panhead screws that attach the stand to the monitor.

That’s it! Pictures are included in the appendix section below to make things more clear. I hope this has helped make your day better.


T20 panhead screwdriver:

4 Panhead screws holding the monitor stand

Panhead screw front shot

The monitor stand should be facing down during removal as shown below:

Top two mounting holes on the monitor stand highlighted by red circles. There are two additional holes at the bottom that are hidden from view:

[PR] Debugger.html: Add a test for pause on next (Mochitest)

Issue#5446, Pull Request#6058.

This is a pull request blog post. Please see my blog post on Mozilla devtools-html/debugger.html for more information about the project.

What’s the bug about?

Project maintainer @JasonLaster would like to add a mochitest that tests clicking the pause button on the debugger results in the debugger pausing on the next execution.

Chosen growth

The goal of this release was to practice writing unit tests for an opensource project. Unit testing has been a primary concern for me since I was first exposed to it about a year ago. I value quality software and understand that it is significant in both private companies and open source communities. Therefore, it is something that I would like to make second nature when I program. This was the perfect opportunity.

Adding the test

The first step to fixing a bug for any opensource software is to fork the repository. Then, clone the forked repository onto the local machine and create a new branch:

git checkout -b issue-5446

The second part is learning more about mochitests. In short, mochitests are unit tests that use the MochiKit platform (an opensource project that Mozilla uses to test the Firefox browser). Using mochitests, developers can programmatically simulate actions that users will perform on the Firefox debugger.

Following the instructions on the mochitests documentation, I’ve setup mercurial (a git like version control system) and autoconf213, a script that produces shell scripts that configures source code packages.

Next up is downloading a copy of Firefox into the repository and configuring links between the debugger.html/ directory and Firefox’s test directory. Luckily, this can all be done using a simple command: ./bin/prepare-mochitests-dev

Once the apparatus was set up, it was time to begin researching how to write mochitests. @JasonLaster gave me some hints by pointing me to a specific test to look at. He suggested browser_dbg-breaking.js inside the src/test/mochitest directory.

@JasonLaster has outlined the steps the test needs to perform in the issue. The steps were as follows:

1. Load a page
2. click the pause button on the debugger
3. Eval a function
4. Assert that we are still paused

The first problem I encountered was that the pause button did not exist.

This was resolved by changing the debugger’s settings:
1. In the debugger’s console, execute: dbg.features
2. Set “remove-command-bar-options” to false

The second part was understanding how to write the tests. The most difficult parts were clicking the pause button on the debugger and evaluating a function. After getting some clarification, I proceeded to experiment with the API to find the relevant functions to use. The API for the tests is located in src/test/mochitest/head.js.

Based on intuition, I first attempted pressKey(dbg, "pauseKey"). This simulated pressing the pause key on the keyboard. However, this did not work. There were also no obvious hints I could glean from the API at that point, so I turned to git grep to help me find uses of the word pause in other tests under the src/test directory. Professor Humphrey taught me to use git grep -C numberOfLines searchPhrase path to see a number of lines of context surrounding the search phrase.

While experimenting with the API, something really odd occurred. The symbolic linking had suddenly been undone, which led to the harness completely breaking. Unfortunately, this only occurred once, so I was unable to reproduce the problem or find its root cause.

I tried deleting the entire directory, re-cloning the repository and initiating the configuration process. However, during the configuration process, there were error messages that I did not understand. After googling around, I was unable to find answers, as the problem was specific to this project. I reached out on Slack for assistance.

@JasonLaster mentioned in the chat that to debug issues with the harness, one should go into the firefox/ directory inside the project folder and run ./mach mochitest --headless devtools/client/debugger/new.

After following those instructions, I was able to diagnose that my computer was missing the Rust compiler. To fix that issue, I ran ./mach bootstrap to try to get the compiler installed, but ran into another error. It was suggested that I run ./mach configure first.

After being presented with four flavors of Firefox to select from, I was instructed to build using Firefox for Desktop Artifact Mode. From there, I re-ran ./bin/prepare-mochitests-dev and, 5 minutes later, I was back in business.

Not wanting to waste any more time, I turned to looking at test files individually, based on the relevance of their names, to gain additional insight. I saw some UI tests using the function clickElement(). I looked inside head.js for the definition of clickElement and found that it takes the debugger instance as the first argument and a selector as the second. The list of selectors is defined as a dictionary. Scanning the list of selectors, I did not see one for pause even though one for resume existed. The value for resume was “”. I began to guess that this may have to do with the user interface.

I initiated an instance of the debugger to confirm my hypothesis. Indeed, after using the inspector on the DOM, I was able to see that “resume” and “active” were both CSS classes of the resume button. I then inspected the pause button, and its value turned out to be “”.

Executing the test with await waitForTime(15000) gave me a 15-second window to play around with the pause button under test mode and see the output being logged to the console. In particular, when I clicked on the pause button, “1 BREAK_ON_NEXT” appeared. This was a hint for a command that I should listen for to ensure that the pause button was clicked!

I went back to head.js and added the property “pause” with the value “” to the selectors dictionary. Then, inside my test, I added await waitForDispatch(dbg, "BREAK_ON_NEXT").

However, once I applied this, I began getting errors about cyclical references. It took a while for me to understand what was happening. I got additional help from @bomsy, who confirmed that I was on the right track with defining a selector for pause to simulate clicking the pause button.

After much digging, I realized the problem came from my improper use of invokeInTab(). I had passed the debugger into the function, which led to infinite self-invocation.

The next step was to figure out how to eval a function. According to @JasonLaster, I should be able to use invokeInTab() to simulate executing a function from the console. However, this did not seem to work. The test repeatedly timed out as it waited for the next action to execute after the pause button was clicked.

Since I was unable to trigger this myself via the console, my temporary workaround was to execute the function as an expression in the watch window, which counts as an execution and triggers pausing the debugger.

I was able to find an example of this under src/test/mochitest/browser_debugger-expressions.js.

In addition, I added an extra line of code telling the test to wait for the debugger to pause, giving the pause effect time to kick in. All of the test code executes asynchronously and therefore requires await if it needs to wait for a command to resolve; otherwise, race conditions occur and the test may fail because code execution has not yet completed.

I have since submitted a pull request and am currently waiting for @JasonLaster to review it. Additional updates will be made to this post once the review has been completed.

What I learned

While writing a test seemed very simple, a lot of the time was spent setting up the test harness and understanding the available API. While the Firefox Debugger team had documentation on mochitests, some areas were unclear. In particular, there were no instructions on how to troubleshoot issues when Firefox does not compile. Inside head.js, some functions were not documented and required inference and guesses as to what they do.

I felt that I did better this time around, having reached out to the community for help much sooner than on my first pull request. I practiced using git grep to find specific keywords I was interested in while searching for example usages of the test harness API I was learning. Furthermore, I expanded this to looking at test files that other developers have written, to understand the usage of other commands I may not have been aware of initially.

What surprised me the most is how the test harness can suddenly break, and the importance of documentation and community in helping opensource developers get the environment steady enough to tackle the real issue at hand. I saw firsthand how big opensource projects can get, and that the work is never complete; there is just more work to be done.

[PR] Brave browser: Fix URL issues with Brave Browser using Test Driven Development

Issue#13897, Pull Request#13898.

What’s the bug about?

This week in class, my partners @irrationalRock, @Woody88 and I worked on a bug in the Brave browser. The bug was about the way the address bar parsed URLs. Our goal was to first write tests for the bug, then create fixes that address the issue.

We looked at the following scenarios:

General strings
• “dog”
• ” dog ”
• “dog cat”
• ” dog cat ”

URLs with query string
• “”
• ”
• “ cat” (failed to properly set search query string)
• ” cat ” (Failed to properly set search query string)

File paths
• “/path/to/file/dog cat.txt” (failed to render)
• ” /path/to/file/dog cat.txt ” (failed to render)
• ” C:\Path\to\file with space.txt” (failed to render)
• ” C:\Path\to\file with space.txt ” (failed to render)
• “file:////path/to/file/dog cat.txt”
• ” file:////path/to/file/dog cat.txt ”

We’ve come to the conclusion that whenever there is a space in a URL’s query string or a file path (Windows or Unix), the browser would take the entire string and perform a google search.

After examining urlutil.js, we began writing a few tests inside test/unit/lib/urlutilTest.js. The tests do the following:

• Ensure prependScheme() is able to prepend the file scheme in front of a Unix file path

• Ensure prependScheme() is able to prepend the file scheme in front of a Windows file path

• Ensure URL encoding works for space characters in absolute paths

• Ensure URL encoding works for space characters in URLs with query string

Having written the tests first, we made sure they failed, then went hunting for the bug.

Looking for the bug

After cloning the repository, we installed the node packages via npm install.

We started the debugging process by executing npm run watch followed by enabling the debugger in VS Code. Looking inside js/lib/urlutil.js, we examined the way isNotURL() parsed a string to determine if it is a URL or not.

This is important, as the software’s workflow proceeds in the following order:

1. Determine if it has a valid scheme (i.e. http://, https://, file://)
2. Determine if it can potentially be URL
3. Trim the input.
4. Prepend the scheme if it is not already provided.
5. Check if the URL can be parsed and send it to the view.

@irrationalRock noticed the issue lies in the regular expressions used in isNotURL().

const case2Reg = /(^\?)|(\?.+\s)|(^\.)|(^[^.+]*[^/]*\.$)/

The intent of the second regular expression was to filter out inputs that match the following cases:

  • starts with “?” or “.”
  • contains “? “
  • ends with “.” (and was not preceded by a domain or /)

However, in doing so, the case (\?.+\s) ended up filtering out URIs with legitimate query strings. The “.+” in the regular expression represents one or more non-newline characters; the original intent, however, was to filter out the case where the input contains “? “.

The solution was to change this regular expression to the following:

const case2Reg = /(^\?)|(\?\s+)|(^\.)|(^[^.+]*[^/]*\.$)/

By modifying that particular part of the regular expression to (\?\s+) we corrected the problem by specifying the filter to only engage if it detects a “?” character followed by one or more spaces.

This resolved the issue with a space in the query string. While we originally explicitly converted the space character into %20 to encode the URL, Professor Humphrey has noted that the URL encoding may occur implicitly when it gets sent back out to the browser. We did some extra investigation, and indeed, once the input string reaches getUrlFromInput(), return new window.URL(input).href on line 160 of urlutil.js will automatically URL encode the space character.

This resolved our very first bug.

Resolving Unix absolute file path with space in file name issue

The second problem was Brave interpreting UNIX absolute file path with a space in the file name as a string for google search. While tracing the way Brave parses an input string of such case, I’ve noticed that the regular expression in isNotUrl() has once again over aggressively classified it as a non-url before the file scheme can be appended to the input string.

The problem lies in line 134 of urlutil.js

if (case2Reg.test(str) || !case3Reg.test(str) ||
    (scheme === undefined && /\s/g.test(str))) {
  return true
}
In the case of an absolute file path with a space in the file name, scheme === undefined && /\s/g.test(str) will classify it as not a URL.

I first consulted Google Chrome and Firefox to see how these two popular browsers handled UNIX absolute file paths with spaces in the file names, and what occurred when bogus paths were provided. This was done by directly testing inputs in these browsers as well as reading the opensource code for both projects.

It turns out that both browsers assume that if a string begins with a forward slash, it is a UNIX absolute file path, regardless of whether the path actually exists.

My solution then was to check if the input string begins with a “/” character; the regular expression /^\// resolved the issue.

Resolving Windows file path with space in file name issue

The last issue to tackle was the same as the UNIX absolute file path issue, except this time on the Windows platform. The difficulty lies in the fact that Windows uses drive letters and backward slashes.

Due to the similarity between this issue and the UNIX absolute file path with space in file name issue, my proposed solution was to build on what I had already written. I refactored the code written previously and instead created a new regular expression case.

const case5Reg = /(?:^\/)|(?:^[a-zA-Z]:\\)/

The “?:” symbol, along with the parentheses containing it, represents a grouped regular expression without capture tracking. In addition, the “|” symbol represents an OR, followed by a regular expression that detects any Windows drive letter.

This allows a Windows file path to be treated as a URL. However, doing this alone is not enough. I went to research on how a windows file path is represented using the file scheme.

Once I’ve understood how a URI should look for a file scheme with windows path, I’ve added a windows file scheme constant that detects windows drive letters in a case insensitive manner. Inside prependScheme(), I’ve added additional logic to check for Windows file path.

Once a Windows file path is detected, a regular expression replaces each backslash with a forward slash.

if (windowsFileScheme.test(input)) {
  input = input.replace(/\\/g, '/')
  input = `${fileScheme}/${input}`
}
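The conversion step can be sketched in Python (the path is hypothetical, and fileScheme is assumed to be 'file://' here):

```python
# Hypothetical Windows path; fileScheme assumed to be 'file://'
path = 'C:\\Path\\to\\file with space.txt'

# Replace each backslash with a forward slash, then prepend the scheme
uri = 'file://' + '/' + path.replace('\\', '/')

print(uri)  # file:///C:/Path/to/file with space.txt
```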

After rerunning all the tests using npm run test -- --grep="urlutil", I was able to verify that all test cases (including the test cases I’ve written) passed. Just to be safe, I launched Brave to test each fix and indeed all of the issues mentioned have been fixed.


The test-driven development style of programming is new to me and took a bit of time to get used to. It is difficult to visualize what a fix will look like before the code is written. However, it pushed me to read the underlying code and explore how Brave operates, following the functions traversed as it parses the URI string the user enters into the address bar, which gave me an idea of what these tests should look like.

Furthermore, I learned to read code that other developers have written for other opensource projects. As a result, I saw new ways of writing code that I had never seen before. This gave me the opportunity to learn how to write better, more concise code.

Debugging bundled and transpiled code using VS Code Debugger


Debugging a web application built using WebPack

In a recent project assigned by my professor, he showed us some really cool developer tools. In particular, one of them was called WebPack. The idea is to have a “compiler” that packs an entire web application into a single JavaScript file, which can be executed by browsers. This is important if one is running code that uses transpilers such as Babel.

While this new tool is truly innovative and offers great advancements in the software development workflow, the question remains: “How do I debug these apps?”

The Problem: Web apps that are packed need to be compiled first

Based on my limited knowledge of Node.JS, the straightforward approach to debugging any Node.JS app is to use a powerful editor such as VS Code and add a configuration for launching the program by targeting index.js or app.js.

A problem you will quickly encounter is that if the application relies on Babel for JavaScript features not yet implemented in Node.JS, VSCode will throw an error. For example:

Unexpected token import

If the code is packed with WebPack using only JavaScript features supported by Node.JS, the app simply wouldn’t start, with no error messages given.

And if that didn’t work, the next option would be to use the Launch via NPM method.

In order to debug apps like this, a source map is required so the debugger can map each line of the bundle back to its location in the individual source files. It is generated by WebPack (with a special option enabled) when the packing takes place.

Furthermore, advanced JavaScript features that are not yet available in the current version of Node.JS also need to be transpiled first, which occurs when the code is assembled. Usually, this involves having the web application already running (i.e. via webpack-dev-server).

My attempt using the Launch via NPM method failed as well.

Solution: VS Code Google Chrome Debugger plugin

After consulting with Professor Humphrey, he pointed me to VSCode Chrome Debugger plugin. The idea is start the app in the terminal as usual, then have VSCode use an instance of Google Chrome as a proxy to run the app.

Once the code hits a break-point, Google Chrome will report back the current state of the app and the results will be displayed on VS Code as if you’ve never left VS Code at all. It truly is magic.

Step 1: Set your desired breakpoints

I like setting the breakpoints ahead of time, but doing this step last can’t hurt either.

Step 2: Configure VS Code Debugger

Click on the Debugger icon and in the drop down menu next to the word DEBUG, select Add Configuration, then select Chrome: Launch.

Check to ensure that the port in the url field matches your app’s configuration.
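The generated entry in launch.json’s configurations array looks roughly like this (the url port and webRoot values are illustrative and depend on your app):

```json
{
  "type": "chrome",
  "request": "launch",
  "name": "Launch Chrome against localhost",
  "url": "http://localhost:8080",
  "webRoot": "${workspaceFolder}"
}
```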

Step 3: Launch the app in terminal

Bring up your terminal and launch your web application. Typically, this may be one of:

yarn start
yarn debug
npm start
npm run debug

Step 4: Launch Google Chrome from VSCode Debugger

From VSCode’s debugger page, press the green arrow button to launch Chrome.

Once the code hits the break-point you’ve specified, VSCode will activate and pause on that line of code for you to inspect! Happy debugging.

Other solutions

I wrote about this method because it was the easiest for me to learn. There are also other ways to debug web apps like this, such as the ones outlined by Microsoft here.

[PR] Debugger.html: Quick open style selected matches

Issue#5679, Pull Request#5750.

This is a pull request blog post. Please see my blog post on Mozilla devtools-html/debugger.html for more information about the project.

What’s the bug about?

The current implementation of “quick open”, a command-palette-like feature in the debugger tool, uses bolded black text to highlight the letters that fuzzy-match what the user has typed into the quick open search box. However, when an item is selected, the visual aesthetic of the item’s text does not conform to Mozilla products such as Firefox, where the text on a selected item appears white and fuzzy-matched letters appear as bolded white text.

At the direction of @violasong, a Mozilla UI designer, the goal is to make the fuzzy matched letters appear bolded white.

Fixing the bug

The first step to fixing a bug in any opensource software is to fork the repository. Then, clone the forked repository onto the local machine and create a new branch:

git checkout -b issue-5679

The second part is working through all of its dependencies and getting the application to compile and run. The repository has very extensive information regarding the setup of the application. Once setup was complete, I explored the application as a first-time user.

In order to fix a problem, one must also understand the problem. This means knowing how to reproduce the scenario in which the selected item in quick open shows the black bolded text. Being able to trigger the quick open panel also means having a way to trace the location of the source code file of interest. Due to the similarities between the debugger tool and Firefox’s native debugger (since this debugger is actually used in Firefox as well), I got confused and thought I was supposed to open my native Firefox debugger to use quick open.

Luckily, the Mozilla team has a Slack channel. @anshulmalik, a member of the repository’s team, asked me for steps to reproduce in order to figure out what I was missing.

I realized that in order to get the command palette to load as expected, I needed to be inside the debugger app’s browser window. When the app launched initially, it had a button called “Launch Firefox”. That browser window also looked different (it has an orange background for the address bar), so I thought it was the debugger. It turns out that was the debugger’s browser, used for browsing projects. The browser window I pointed to localhost:8000 was the actual debugger. I needed to select a project in the debugger window in order to begin debugging it.

From there, using Firefox’s native HTML inspector, I was able to pinpoint the HTML code associated with the command palette.

Two things stuck out at me when I inspected the HTML element: a <div> with the classes selected and result-item, which is associated with the highlighted item. Under that node is a title node followed by a nested div node that contains a mark element with a class named highlight. The mark element contains the letter “t”, which happens to be the same letter I typed into the search box.

The next step was to find the name of the source file that controls this rendering. I went straight for the issue tracker and searched through closed issues, looking for UI-related issues and issues regarding searching. During my search, I found a particular issue related to the UI component I was working on and dug deeper into its related pull request. Pull requests contain a Files changed tab, which gave me clues as to the files that were modified by the contributor. A really interesting file stood out at first glance: it was named QuickOpenModal.js and was located inside the src/components directory. When I located this directory inside the project folder, I saw QuickOpenModal.js. Bingo! Light bulbs went off in my head. At the same time, I realized that what I had been calling the command palette was actually called Quick Open.

After some experimentation, I was able to create the fix.

.selected .highlight {
  color: white;
}

Once the updates were pushed to my forked repository, I headed back to the original repository and was immediately greeted with the option to create a pull request.

Clicking on the Compare & pull request button presented me with a preformatted form to fill out, which states the issue number, what I’ve fixed, and a test plan: a bullet-pointed list of how the maintainer can verify that the fix was successful. In the request form, I also included a screenshot showing the final result.

From there on, everything was history!

What I learned

I think the most important thing to mention is that opensource was not as scary as I had initially thought. I want to thank @jasonlaster for assigning me the bug and @anshulmalik for helping me through the issues I’ve encountered. I also want to thank professor Humphrey for encouraging me to choose a bug and introducing me to the Mozilla team.

It was a valuable experience learning the steps needed to contribute to any community. It begins with doing research about the project and understanding how the community works. This means reading the documentation made available and trying out the project. While there’s a tendency to think software developers work in isolated cubicles, working with opensource opened the doors to learning how to work with people across different cultures and geographic areas. Once the socializing settles, it is time to search for issues to tackle, then find the related source file and provide a fix.

I’ve also found GitHub’s pull request and issue templates interesting. Simply observing how other projects are managed taught me a tremendous amount about opensource. Furthermore, learning that GitHub has specific keywords that can be used in pull request titles, descriptions, and even commit messages to automatically close issues was an eye opener. This lets anyone streamline issue management, making the opensource experience even better.

Mozilla is also a major user of ESLint and Prettier. In fact, their Prettier configuration checks all files, even CSS files! That was shocking to me.

Issue titles are very important. What I thought was the Command Palette was actually called Quick Open. Had I taken more notice of my issue’s title, I would have realized I could use it as the search term when looking through closed issues, and found relevant ones sooner.

Steps to reproduce were, are, and always will be very important when tackling any bug. Solving a problem requires being able to encounter the problem first. This is true when filing an issue for others to resolve in your own projects, or when describing a problem to a team member or mentor of a project you’re contributing to.

Finally, I’ve learned to ask for help. Having a great team such as Mozilla, with mentors willing to extend their help, gave me confidence to tackle tough bugs in the future. Asking for help means being able to pinpoint areas of interest faster. Furthermore, folks with more experience can provide you with resources that you may not know enough about to Google, which can help advance your knowledge when tackling a difficult problem.

Mozilla devtools-html/debugger.html

debugger.html is a Mozilla project aimed at creating a hackable JavaScript debugger that is easy to use while providing developers with a powerful and testable tool for the web. It is built using React and Redux. Mozilla created this tool mainly for use in Firefox Developer Tools; however, it also works with Chrome and Node.JS. The team invites anyone to enhance the tool to work with other environments and browsers.

Where’s the code?

The location of the source code could not be more obvious: it’s in the src directory right under the project root folder.

Where’s the documentation?

Documentation exists under the docs directory right under the project root folder. In addition to traditional documentation regarding the usage of the project and project policies, Mozilla has also included a page documenting weekly updates.

How can you get involved?

One great thing about Mozilla is its strong, friendly community. A great way to get involved is to join their Slack team. You can get a feel for the community’s environment by idling in the channel and observing conversations. You may also notice that veteran contributors will welcome you to the team and encourage you to select a bug to work on. These folks will also extend help when you need an extra boost while tackling bugs.

In addition to joining their slack channel, the repository also includes a file which contains all the details about how to contribute to the project. Common ways of contributing include reporting bugs, suggesting enhancements, writing documentation, giving talks, and organizing meetups.

Since Mozilla has a large community base and has many years of experience in opensource, the team also includes participation guidelines that they expect all members of the community to follow. This is to ensure a mutually respectful and open environment.

Finally, Mozilla has a bug bounty program. If a vulnerability is found, the community suggests reading the policies regarding reporting vulnerabilities. In addition, they recommend creating an issue on Bugzilla (Mozilla’s internal bug tracker) for security issues.

What are some interesting things you’ve learned while observing the project?

Being a first-timer at an official opensource project, I’ve found the way the team structures its issues and pull requests fascinating.


Issues generally have a screenshot (where possible). This is then followed by a short 1-2 sentence description of the issue along with examples of the problem and the expected result. Some issues include steps to reproduce. The community has placed strong emphasis on steps to reproduce, as they help new contributors reproduce the problem in order to tackle the bug. The issue tracker also includes a host of descriptive tags such as: in progress, not-available, difficulty:medium, enhancement, performance, bug, tracking, code health, and good first issue. In particular, “in progress” is the most useful tag, as a contributor can see at a glance which issues to skip when searching for issues to work on.


The issue tracker also has a claim bot, which manages who is responsible for an issue. Contributors can reply to an issue with /claim to have the claim bot set the issue to “in progress” mode, signalling to the community that they are currently working on it.

Pull requests

It’s interesting to note that the project contains a .github folder. Inside that folder, you will find files that contain templates for issues and pull requests. This is something I had not seen before, but I quickly grasped the importance of having such files for projects involving multiple contributors.

The pull requests contain the following information:
• The statement “Fixes Issue #(issue number)”
• Summary of Changes
• Test Plan (Step-by-step instructions for the reviewer to check that the fix is working)
• Screenshots of the fix (optional)

One interesting thing I’ve learned just this past week from Professor Humphrey is that GitHub has a set of keywords that can be used in a pull request title or description to allow the associated issue to close automatically upon merging. It appears Mozilla is taking full advantage of this by enforcing the “Fixes Issue…” statement in their pull requests.

Bridge-Troll project: Implement dark mode

This week, I was tasked to implement dark mode for the bridge-troll project.

Preparing for the fix

To ensure I can get updates from the original bridge-troll repo, I set up the upstream remote for the repository using the following command:

git remote add upstream

Trying out the app before working on it

One important thing is to ensure the app works as-is, and to learn how it works, before attempting to make changes. I did a quick build to see how the app works. According to its README, I should be able to run debug mode to bypass GPS-based location, allowing me to navigate the game by double-clicking the mouse.

npm run debug

However, I soon found that debug mode was not working. I began examining the code files, making educated guesses based on file names to see where I could solve the problem. I came across a file named geo.js. In this file, there was a line that enables debug mode. It appears the parcel.js module was having issues parsing flags.

I bypassed the check by commenting out the if statement, enabling navigation using double mouse clicks.

The next step is to figure out where the code affecting the maps lived. Luckily, the naming convention for this project was easy to understand. Navigating to maps.js revealed the code used to generate the map.

Choosing the proper theme and icons

This application uses a node module called Leaflet JS, an open source JavaScript library for creating mobile-friendly maps. The library supports a series of themes, also known as leaflet providers.

After browsing through the themes, I settled on the World Street Map theme for dark mode. I chose it for its red and yellowish tint, which reduces the amount of blue light coming from the screen at night, making the app more usable without reducing the visibility of the streets and their names.

As for the icons, the default sets were kept because they were suitable for distinguishing bridges that have already been visited from bridges that have not. Since the roads were a pale red color, the default black locks were a good choice for creating contrast on the screen.

Determining sunrise and sunset

To determine the times of sunrise and sunset, I used a node module called SunCalc, which computes the position of the sun for a given time and location. This information is used to calculate when sunrise ends and when sunset begins, allowing automatic switching between dark and light mode.

Exporting the code to a separate module

Once the initial implementation works, the goal is to separate the code into its own module for re-usability.

Creating the module and factoring out dark mode code

I began by creating a separate js file for our new module. I called the module ‘displaymode’.

At the beginning, I had trouble figuring out which parts of the code inside map.js’ init() method could be factored out for the dark mode feature. After consulting Professor Humphrey, I realized that the new module could have its own init() method, contain the logic that figures out the time of day from given latitude and longitude arguments, and expose a getMode() method that allows map.js to obtain the proper theme to render.

Exposing a setter method in DisplayMode module

In order to facilitate re-usability in other parts of the app in the future, I’ve included a setter method setMode() in the DisplayMode module, which allows the caller to pass the string ‘dark’ or ‘light’ to manually set the theme of the UI.
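Putting the pieces together, the module’s shape might look something like this minimal sketch. The internals here are my own illustration: the actual module derives sunrise and sunset from latitude and longitude using SunCalc, whereas this sketch takes them as arguments so it can stand alone:

```javascript
// Hypothetical sketch of the DisplayMode module's interface: init() records
// the day's sunrise and sunset, getMode() derives 'light' or 'dark' from the
// clock, and setMode() lets callers override the choice manually.
const DisplayMode = {
  _override: null,
  _sunrise: null,
  _sunset: null,
  init(sunrise, sunset) {
    this._sunrise = sunrise;
    this._sunset = sunset;
  },
  getMode(now = new Date()) {
    if (this._override) return this._override; // manual override wins
    const daytime = now >= this._sunrise && now < this._sunset;
    return daytime ? 'light' : 'dark';
  },
  setMode(mode) {
    if (mode === 'dark' || mode === 'light') this._override = mode;
  }
};
```

With this shape, map.js only needs to call DisplayMode.getMode() to pick which leaflet provider theme to render.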

Preparing to ship new code

Once all the code has been written, I proceeded to run

npm run eslint


The results showed 2 warnings and 1 error. I realized that when I moved the code from map.js into displaymode.js, I had forgotten to remove the variable that requires the SunCalc node module.

I then investigated the warnings regarding console statements in csv2json.js. However, I found that those 2 lines of code are related to error handling and printing results to the console. Therefore, while it is not recommended practice as outlined here, I left them alone.

The next thing I did was run npm run prettier. The purpose is to ensure that the formatting of my code conforms to the bridge-troll repository’s style.

Creating the commit

Once linting and prettifying were done, I staged all of the files using the git add command and performed a git commit to record my changes.

Pulling the latest changes

To ensure my code base was normalized against upstream, I quickly performed the git pull upstream master command. I then got warnings regarding merge conflicts in package-lock.json.

It appears that the original repository had been updated to use WebPack instead of parcel.js.

To resolve this issue, I forcefully removed package-lock.json using rm -rf package-lock.json. Then I executed npm i to allow package-lock.json to regenerate itself and to install WebPack.
I then executed npm run debug to test whether the newly added code worked with the latest files pulled from the upstream repository. Indeed, the application loads.

To ensure dark mode works, I purposefully reversed the if statement in displaymode.js to verify that the dark theme appears as expected.

Commit and Push the changes

Now that everything was working, I staged the files, created a new commit detailing the changes I made to implement dark mode, and pushed the commit to my own forked repository.

Creating pull request

Heading over to my own bridge-troll repository on GitHub, I saw a prompt that said “create pull request”. Clicking the create pull request button presented me with a form to fill in.

I provided details regarding the issue number that I was working on and the details of the things I’ve added in the pull request.


One thing I got wrong was that I had been creating commits against my forked repository’s master branch. This is very bad practice and something I will remember never to do in the future.

I realized this when I was unable to see the create pull request link on my bridge-troll repository. To work around this, I created a separate branch using git checkout -b issue-6. The name issue-6 is simple, yet descriptive of the problem I was addressing.


Overall, I found this exercise to be a very fun and practical way of learning the process of fixing a bug and creating a pull request. While all the steps were mentioned during the lesson, it was obvious that I did not recall them all during the actual fix. With this exercise, I now know which areas to pay more attention to in my future fixes.

Learning and working with ECMAScript Standardization Tests

Recently, I’ve learned about the browser wars and how standardization for the web came to be. The primary focus is on ECMAScript, another name for JavaScript. If you’re interested in the history of JavaScript and how it got re-branded as ECMAScript, find out more about it here.

The summary is that while Brendan Eich had created a compelling language for the web, his implementation rested with the Netscape browser, which at the time was closed source. Other browsers, such as Microsoft’s Internet Explorer, desiring the same capabilities, reverse engineered Brendan’s work and created their own implementation, calling it JScript to avoid trademark disputes. This meant the behavior of JavaScript was not consistent among browsers, and web designers often resorted to multiple hacks to ensure cross-browser compatibility. ECMA is a European organization that standardizes technology, which Brendan approached to standardize JavaScript, both to avoid further fragmentation and to prevent Microsoft from setting their own standards.

Since the standardization of the language, software vendors and internet citizens like you and I have been able to contribute to the discussion about the standards that should be enforced in all browsers. To help browser vendors ensure their browsers conform to the standard’s specifications, automated testing was created. It is open to the public, so anyone can contribute by providing suggestions and feedback regarding the standard, or by helping write the tests that browser vendors check their implementations against. I chose to practice writing some of these tests.

Obtaining the tests

ECMAScript has a GitHub repository, which keeps track of all tests written for ECMAScript and manages collaboration between everyone working on standardization. I began there.

Step 1: Read

Often, I get so excited about working on an open source project that I completely skip the reading, hoping to jump right into the heart of the project. The README on the tc39/test262 repository is very informative and is a great place to begin reading.

In particular, the runners & harnesses section is the most important. A harness, in their terms, is a command-line tool that executes the tests written for the standard against a vendor’s implementation.

The harness I chose was the Node.JS one, since I have the most experience working with it as of late.

Step 2: Install the harness

Following the commands they suggested, I executed the following in the terminal:

git clone --depth 1
cd test262
npm install -g test262-harness

Take note, the --depth 1 part of the git command specifies obtaining the latest commit only. This is useful when our purpose is just to obtain the latest version of the tests for conformance-testing purposes.

Step 3: Execute tests

It is always a good idea to run the unit tests of a project before beginning to work on it. This way, we have an idea if there are any existing errors before introducing code of our own.

test262-harness test/**/*.js

Test results

Using Node.JS version 9.6.0, the 205 tests completed with 199 passed and 6 failed.

The tests that Google’s V8 engine failed seem to be related to localization and to the handling of property descriptors in the harness’s own self-tests.

Background information: Node.JS is a JavaScript run-time environment built on Google’s V8 JavaScript engine, intended for back-end processes such as hosting a website or web service.

Step 4: Read

Now that the harness is set up and verified to work, I began reading the contributing documentation on the repository.

After reading this document, I feel I’ve understood more about:
• Test naming conventions
• Test file format and commenting
• How to write tests that run on the harness

Step 5: Read the Specification

Before writing any tests, we must understand the specifications of our point of interest. The goal is to understand the implementation of Array.prototype.reverse().

This process took me a while as I had to dig deep to understand the intent of the pseudo code as well as the behavior of the abstract functions it referenced.

Step 6: Write some tests for Array.prototype.reverse()

Now that we’ve read, it’s time to practice writing some tests. I’ve created array-with-primitives.js as the name of my test file. My goal was to test the following cases:

• The reversal of [] is []
• A collection of numbers is reversed correctly: [1, 2] > [2, 1]
• A collection of characters is reversed correctly: ['a', 'b'] > ['b', 'a']
• A collection with mixed primitive data types is reversed correctly: ['a', 1, undefined, null] > [null, undefined, 1, 'a']

As I experimented with writing these tests, I ran into a few problems

1) Testing the case of reversing []

The test I wrote initially was

let a = [];
let reverse = a.reverse();
assert.sameValue(a, reverse, `#1: a = []; a.reverse(); reverse.length === 0. Actual: ${reverse}`);

The specification noted that an object is returned, and therefore I assumed a and reverse were two separate objects which I could compare using assert.sameValue().

What I found over time was that this is not the case. In fact, a also gets modified, and the same object is returned and referenced by reverse. After reading the specification in more detail, I realized this behavior is intended. I was able to verify this in the Node environment by performing

let a = [1];
reverse = a.reverse();

The result was that both a and reverse contained [1]. This is when I realized they reference the same object and cannot be used to check whether the array is truly empty. I then thought of comparing it using

assert.sameValue(reverse, [], `#1: a = []; a.reverse(); reverse.length === 0. Actual: ${reverse}`);

This did not work because assert.sameValue() compares the references the two variables point to, not the contents. The solution I came up with to guarantee the result was what I expected was to use the length property of reverse to ensure the length is 0.
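The working version of the first test might therefore look like the following (plain JavaScript without the harness helpers, to illustrate the idea):

```javascript
// reverse() returns a reference to the array it was called on, so the
// emptiness check goes through the length property rather than comparing
// against a fresh [] literal.
let a = [];
let reverse = a.reverse();

console.log(reverse === a);   // true: same object, not a copy
console.log(reverse.length);  // 0: the array really is empty

if (reverse.length !== 0) {
  throw new Error(`#1: a = []; a.reverse(); reverse.length === 0. Actual: ${reverse}`);
}
```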

2) Checking values inside an array with length > 0

This problem is related to the previous one. My assumption was that I could check the contents of an array and make comparisons by passing in a separate array with the expected result.

a = [1, 2];
reverse = a.reverse();
assert.notSameValue(a, reverse, `#2: a = [1, 2]; a.reverse() !== a. Actual: ${reverse}`);

The result was the following

FAIL array-with-primitives.js (default)
#2: a = ['a', 'b']; a.reverse() !== a. Actual: 2,1 Expected SameValue(«2,1», «2,1») to be false

FAIL array-with-primitives.js (strict-mode)
#2: a = ['a', 'b']; a.reverse() !== a. Actual: 2,1 Expected SameValue(«2,1», «2,1») to be false

This failure also indicated to me that, when the specification mentions Set(), the current values of the array are being modified. In other words, a's contents are changed directly when a swap occurs. This makes sense, since the specification takes this as an argument to be made into an object. I also realized afterwards that returning the object is similar to C++’s return this; statement, in which a pointer to the underlying array in memory is returned.

This was confirmed when I investigated SameValue(). SameValue(x, y) in this scenario returns SameValueNonNumber(x, y), which states that if x and y are the same Object value, return true; otherwise, return false. This further confirmed my theory.
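JavaScript exposes the SameValue() operation directly as Object.is(), which makes this easy to see for yourself:

```javascript
// Object.is() implements the spec's SameValue() algorithm. Two distinct
// arrays with identical contents are not the same value, while an array
// compared with the result of its own reverse() is, because reverse()
// returns the same object.
const x = [1, 2];
const y = [1, 2];

console.log(Object.is(x, y));            // false: different objects
console.log(Object.is(x, x.reverse()));  // true: reverse() returns x itself
```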

However, a second question came to me. At a quick glance, I thought to myself, “Why did I fail 2 tests when clearly only one has failed?” The reason is that the harness executes each test twice: the first run is executed in default mode, and the second in strict mode. Strict mode is a restricted variant of JavaScript that removes features that are typical leading causes of bugs.

The solution I had for this referencing problem was to test each individual element separately.

a = ['a', 1, undefined, null];
reverse = a.reverse();
assert.sameValue(reverse[0], null, `#4: a = ['a', 1, undefined, null]; reverse = a.reverse(); reverse[0] !== null. Actual: ${reverse[0]}.`);
assert.sameValue(reverse[1], undefined, `#4: a = ['a', 1, undefined, null]; reverse = a.reverse(); reverse[1] !== undefined. Actual: ${reverse[1]}.`);
assert.sameValue(reverse[2], 1, `#4: a = ['a', 1, undefined, null]; reverse = a.reverse(); reverse[2] !== 1. Actual: ${reverse[2]}.`);
assert.sameValue(reverse[3], 'a', `#4: a = ['a', 1, undefined, null]; reverse = a.reverse(); reverse[3] !== 'a'. Actual: ${reverse[3]}.`);

This worked, but it was tedious and clumsy.

3) When a test fails, all tests thereafter do not get executed.

This was behavior I did not expect. Considering the harness is a tool to check a browser’s level of compliance, failing one test shows “2 tests failed” (one per mode). However, what confused me was that even after inducing 4 failing tests, the result was still 2 failed tests. This is how I realized that testing stops the moment a test fails. I’m not sure if this is intended behavior, as running the harness with the original tests showed that it continued testing even after some failures. Comparing my tests to those written by others, I do not see anything different other than my method of testing being more up to date with the latest conventions. I’ve also considered whether it is due to synchronous testing, but ruled that out when other tests did not use promises. I am not sure if this is intended behavior or simply a bug.

Step 7: Improving my tests based on inspiration from TypedArray.prototype.reverse

One thing I noticed while observing the files under test/built-ins/TypedArray/prototype/reverse/reverts.js was the use of a method called compareArray(). While it looks intuitive at first glance, I was curious to find out its inner workings, since I felt my existing tests could benefit from this function.

Using the “T” shortcut on GitHub, I was able to locate a file named compareArray.js under the harness directory. The function is straightforward and returns a boolean indicating whether two arrays are equal in their values. This is exactly the function I needed!

I rewrote my tests using compareArray().
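Based on the harness definition I found, the rewritten mixed-primitives test might look something like this (compareArray() is shimmed here from its described behavior so the sketch runs standalone):

```javascript
// Shim approximating harness/compareArray.js: true when both arrays have
// the same length and strictly equal elements at every index.
function compareArray(a, b) {
  if (b.length !== a.length) return false;
  for (let i = 0; i < a.length; i++) {
    if (b[i] !== a[i]) return false;
  }
  return true;
}

let arr = ['a', 1, undefined, null];
let reversed = arr.reverse();

if (!compareArray(reversed, [null, undefined, 1, 'a'])) {
  throw new Error(`#4: expected [null, undefined, 1, 'a']. Actual: ${reversed}`);
}
```

Compared to asserting each element separately, one compareArray() call covers the whole expected array in a single, readable line.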


Overall, I’ve learned how to get involved in the standardization process of the web, in particular ECMAScript. I now know how to set up the software necessary for standards testing and how to make use of GitHub to contribute to open standards. I’ve also learned more about how JavaScript behaves by doing this exercise. The specification for Array.prototype.reverse() seemed cryptic at first, but after re-reading the pseudo code a few times, it became clear what its intentions were. It is satisfying to see all the tests you’ve written pass on Node.JS. I also got a glimpse of how great developers build simple functions and unit-testing code that is only one to two lines long. It was incredibly easy to understand and pleasant to work with.