Mocking in PyTest (MonkeyPatching)

This blog post is a continuation of my last post on using PyTest and the PyTest-Sanic plugin to test Sanic endpoints. Please skim through it so that you are up to speed with the details of this post.

The main focus of this post is learning how to mock a module that your test depends on. Mocking is a technique in unit testing. While the way mocking is done is the same across different tools and languages, there are some differences in the methods and syntax used.

I will be using Python and PyTest to illustrate how this is done.

Here’s a scenario I have been presented with:

  • I have a Sanic Web Service that accepts POST requests passing in a file path
  • The web service will parse the path provided and instantiate an object that will return a list of files in the path. In doing so, it will use Boto3 to make calls to AWS infrastructure.
  • The web service will return data to the client as a JSON object {'files': []}, with the names of the files found inside the array.

The question is this, how can I test the functionality of this endpoint without having the object doing the parsing inside the web service making actual calls to AWS?

Why is making actual calls to AWS a bad idea?

Before we continue answering our question, I want to do a quick intermission to answer a big question you might have. Why not just let it call AWS and return results? How do you know that things work if you don’t actually make the calls?

Great question! The problem is that while AWS is a very reputable cloud provider, there is no guarantee that it will be 100% available at all times. There is always some inherent risk that the infrastructure goes down, even if it is just 0.000001%. Another issue is that making calls over the Internet is expensive: it consumes your network and, most valuable of all, your time. Allow me to exaggerate a bit to demonstrate my point: if you have 10,000 test cases written for your application and each test depends on an AWS call, a single test run consumes 10,000 calls, using data and your (or your company's) network to make them. Some services, such as AWS API Gateway, charge per request or give you a limited allotment of requests. Now picture having multiple applications, each with 10,000 tests that call AWS. This gets expensive really fast! Having tests that run locally and immediately inform you whether your application is behaving properly is paramount. A test that would have taken 500ms or even 1 second can now be done in 6ms or less. That's at least 83x faster!

Great! So, what is mocking?

I like to use the analogy of a parrot when explaining mocking. Just like a parrot mimicking human speech, mocking in unit testing is mimicking the response that you would expect from a module or a function.

We are swapping out the calls to AWS with the expected response that we would get from AWS depending on the scenario we are trying to test. This is great because we can focus on testing the implementation of our logic rather than dependencies such as network connectivity or the availability of AWS services.
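As a minimal sketch of the idea (all names here are hypothetical, not from the web service above), mocking boils down to swapping a real dependency for a stand-in that returns a canned answer:

```python
# Hypothetical dependency that would normally hit the network
def fetch_files_from_aws(path):
    raise RuntimeError("this would call AWS")

# Code under test: builds the response from whatever the dependency returns
def list_files(path):
    return {'files': fetch_files_from_aws(path)}

# In a test, swap the dependency for a "parrot" with a canned answer
def fake_fetch(path):
    return ['folder1/file1.pdf', 'folder1/file2.pdf']

fetch_files_from_aws = fake_fetch  # crude manual swap; pytest's monkeypatch does this safely

print(list_files('folder1'))  # {'files': ['folder1/file1.pdf', 'folder1/file2.pdf']}
```

The swap works because list_files looks up fetch_files_from_aws at call time; pytest's monkeypatch fixture does the same thing, but undoes the swap automatically when the test finishes.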

How do we do it?

In your test file

  1. Import your Sanic application
  2. Import your module
  3. Create a class that will represent your response object
  4. Write a static method inside the class that will return the response.
  5. Write your test function, accepting monkeypatch as a parameter
  6. Create a sub-function under your test function that will create an instance of your response class and invoke the response method.
  7. Apply monkeypatching to the module you want to mock, specifying the method to mock and pass the name of your sub-function that will return the response.
  8. Invoke Sanic app’s test_client and make a POST request to the desired endpoint.

Ok, that was a mouthful. Let’s see it in action.

# test_myApp.py
from myApp import app
from myModule import Module
import json

# This class captures the mocked responses of 'Module'
class ModuleResponse:
  @staticmethod
  def filesFound(*args, **kwargs):
    return ['folder1/file1.pdf', 'folder1/file2.pdf']

  @staticmethod
  def noFilesFound(*args, **kwargs):
    return []

def test_return_results(monkeypatch):
  async def mock_response(*args, **kwargs):
    return ModuleResponse.filesFound()

  # 'searchFiles' is the method of Module that the endpoint will call
  # and also the method we want to mock. The 3rd argument passes
  # the function that returns the mocked results we want
  monkeypatch.setattr(Module, "searchFiles", mock_response)

  # Using Sanic app's test_client to make a post request to our endpoint
  response = app.test_client.post('/search', False, json={'path': 'folder1'})

  result = json.loads(response.body)
  assert response.status == 200
  assert result == {'files': ['folder1/file1.pdf', 'folder1/file2.pdf']}

# Providing an empty path should return [] with 200
def test_search_empty_path(monkeypatch):
  async def mock_response(*args, **kwargs):
    return ModuleResponse.noFilesFound()

  monkeypatch.setattr(Module, "searchFiles", mock_response)

  response = app.test_client.post('/search', False, json={'path': ''})
  result = json.loads(response.body)
  assert response.status == 200
  assert result == {'files': []}

# myModule.py
import boto3

class Module:
  def __init__(self, searchPath=''):
    self.searchPath = searchPath
    self.files = []
    self.client = boto3.client(
        's3',
        verify=False
    )

  async def searchFiles(self):
    return await self.__lookupFile()

  async def __lookupFile(self):
    ...

# myApp.py

@app.route("/search", methods=['POST', 'OPTIONS'])
async def do_post(request):
  ...
  try:
    myModule = Module(searchPath)
    results = await myModule.searchFiles()
  except Exception as e:
    return json({'error': str(e)}, status=500)

  myResponse = {
    'files': results
  }

  return json(myResponse)

As you can see, monkey patching is key! monkeypatch.setattr() replaces the real implementation of Module's searchFiles() method with our fake response.
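Under the hood, monkeypatch.setattr() is roughly a setattr() that remembers the original attribute and restores it during test teardown, so the patch never leaks into other tests. A simplified sketch of that behaviour (not pytest's actual implementation):

```python
# Stand-in for the real Module from the post
class Module:
    def searchFiles(self):
        return ['real/result.pdf']

# Remember the original, then patch; this is what monkeypatch.setattr does,
# with the restore deferred to test teardown (monkeypatch.undo()).
original = Module.searchFiles
Module.searchFiles = lambda self: ['fake.pdf']
assert Module().searchFiles() == ['fake.pdf']  # patched

Module.searchFiles = original  # restore, as teardown would
assert Module().searchFiles() == ['real/result.pdf']  # back to normal
```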

I hope this guide has helped you gain more understanding with mocking and how to do it using PyTest.

If you think this has helped you in any way, please help share this post and feel free to add me on Twitter @AlexLHWang.

Sanic Endpoint Testing using PyTest

Recently, at work, I started working with my very first Python web framework, called Sanic. While the framework is relatively easy to use, I cannot say the same for unit testing, as its documentation wasn't very clear to me.

If you’re having issues getting unit testing setup for this web framework, you have come to the right place.

Before We Begin

I want to put a disclaimer that I am using Sanic version 19.9.0 on Python 3.7.6 on macOS 10.15.3. This means the installation instructions will be biased towards Macs, but Windows or Linux installation should be similar with minor modifications.

What is Sanic?

Sanic is a Python 3.6+ web framework that allows you to create an HTTP server. Its strength lies in Python's async and await syntax, introduced in version 3.5, which allows non-blocking code for higher performance.

Things you need

  1. Python 3.6+ and pip3 installed on your machine
  2. Virtual Env
  3. PyTest

Setting up the environment

The workflow is as follows:

  1. Install Python 3.6 or higher
  2. Install Virtual Env
  3. Install PyTest

Install Python 3.6+

The standard way of doing so would be downloading the latest version from python.org. However, if you use a Mac, you can install homebrew to get the job done.

At the time of writing, the latest version of Python is 3.8.1. To install using homebrew, execute brew install python3. Homebrew will automatically install the latest Python 3 available.

Install Virtual Env

Virtual Env allows you to lock down your dependencies to a specific version and is isolated from your main system. This ensures system packages do not conflict with your project’s packages.

  1. Execute pip3 install virtualenv in terminal.
  2. Create a virtual environment: virtualenv <name> (ie: virtualenv myProject)
  3. Start the virtual environment: source <name>/bin/activate (ie: source myProject/bin/activate)
  4. Install dependencies using pip3 as usual.

To exit the virtual environment, use the deactivate command.

Install PyTest

  1. In the activated virtual environment, execute pip3 install pytest
  2. Execute deactivate && source myProject/bin/activate. This refreshes the virtual environment once the path for PyTest has been set.

Create the First Route of your Application

Example:

# myApp.py
from sanic import Sanic
from sanic.response import text

app = Sanic('myApplication')

@app.route("/")
async def main_route(request):
  return text('Hello')

if __name__ == "__main__":
  app.run(host="0.0.0.0", port=1234)

Creating a Simple Test

The biggest issue I had was understanding how to import my web service application into the Sanic test.

I placed my test at the root directory of my project folder co-locating it with myApp.py just for proof of concept. In the future, I intend to put them under a “test” folder.

PyTest first looks at any arguments passed on the command line when executing pytest, then at the testpaths attribute inside config files, before recursively searching the current directory for files named test_*.py and *_test.py.

To create a simple test, I co-located my test next to myApp.py

# test_myApp.py
from myApp import app

def test_default_route():
  request, response = app.test_client.get('/')
  result = response.body
  assert response.status == 200
  assert result == b'Hello'  # response.body is bytes

I imported my web service application as a plain co-located module. When code gets imported into the test file, its top-level statements are executed automatically (similar to the way JavaScript behaves). This means an instance of the Sanic application is instantiated in memory, so you can use it immediately to test the endpoint.
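This import-time execution is also why the if __name__ == "__main__" guard in myApp.py matters: without it, importing the app into the test file would call app.run() and block the test run. A tiny standalone demonstration (the module name is hypothetical):

```python
# greeting.py (hypothetical): everything at module level runs on import
print("this prints as soon as the module is imported")

def greet():
    return 'Hello'

if __name__ == "__main__":
    # Runs only when executed directly (python greeting.py),
    # never when a test file does `from greeting import greet`
    print(greet())
```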

Sanic application instances provide a test_client, which mimics an HTTP client making calls to an endpoint using HTTP methods. In my example, I used the get method on the main route. You can, of course, change the HTTP method to match your needs.

I hope this has helped you get started in endpoint testing with Sanic. I will be adding subsequent blog posts as I learn more about this topic and learn from best practices.

How to remove Dell U2711 Ultrasharp monitor stand

Background

Around 6 years ago, I bought my very first professional grade monitor. It had best-in-class colour accuracy for 2012 and an IPS panel, something quite new at the time. It was revolutionary. The cost was also revolutionary, at a whopping $800 during a sale, with an MSRP of over $1000. The monitor came in the box with the stand already attached.

6 years later, I decided it was time to put it on a monitor stand with more adjustment options for better ergonomics. I went online to search for the long forgotten manual via Google and was shocked to find that there were no instructions on how to remove the stand from the monitor! If you're reading this right now, you're probably in the same situation.

I gave Dell a call to see if their technicians would know. The first thing the technician said after I told him about my problem was “Ok sir, this is a five year old monitor. Let me go on Google to see if I could find the manual and find out how to do this.” I thought, perhaps the Dell technician had special magical powers of Googling that I didn't know about. So I waited about 5 minutes for him to find information. He came back and said “Sir, the manual does not seem to provide information for how to remove the stand from the monitor. Let me see if I can find information in our database for how to remove the stand from this monitor.” I thought, “Cool, perhaps there may be some secrets hidden in Dell's very own database that will save the day.” Another 5 minutes went by and the technician returned: “Sir, I'm so sorry, but it seems that our database does not have information on this either.” Like any concerned consumer, I followed up with “Ok, so what are my next steps? What can I do about this?” He said “Well, you can try to ask a friend who has done this before how to do this.” Steam was pouring out of both sides of my ears at this point. If I had a friend who knew how to do this, would I be calling Dell technical support in the first place?! I told him, as calmly as I possibly could, “Well, I don't have such a friend.” His response was “Well sir, in this case, you can continue to try on your own because there's nothing much we can do on our end.” I decided that hassling him would do us no good, so I thanked him and went on my way.

I followed the fellow's advice and, after some trial and error, figured it out myself. I hope this guide will be useful to you.

This experience above has left me with a sour taste in my mouth. I had hoped that a big company like Dell would have systems in place to handle situations like this. Perhaps an escalation of the issue from the representative to an actual engineer who may know how to solve this problem or giving proper training to employees about the general designs of the monitors and common questions that may come up.

What I did not include in the transcript was the fact that the representative admitted that I was on the line with the technical hardware team. Yet, the irony of all of this was that the “technical hardware team” did not have the answers to the company’s very own products, instead relying on Google to find its own product’s manuals; It reflects that the management team is not providing its people with the tools and training to succeed.

This leaves a lot more to be desired. I sincerely hope that this company would work on a strategy to turn this around.

Items You will need

* T20 screwdriver (for the four panhead screws)

Steps to remove the stand

1. Prepare a soft smooth surface for the monitor’s screen to rest upon.
2. Place the monitor flat on the smooth surface prepared in step 1 with the screen facing the surface.
3. Take the T20 screwdriver and remove the four panhead screws that attach the stand to the monitor.

That’s it! Pictures are included in the appendix section below to make things more clear. I hope this has helped make your day better.

Appendix

T20 Panhead Screw driver:
1.jpg

4 Panhead screws holding the monitor stand
2.jpg

Panhead screw front shot
3.jpg

The monitor stand should be facing down during removal as shown below:
4.jpg

Top two mounting holes on the monitor stand highlighted by red circles. There are two additional holes at the bottom that are hidden from view:
5.jpg

[PR] Debugger.html: Add a test for pause on next (Mochitest)

Issue#5446, Pull Request#6058.

This is a pull request blog post. Please see my blog post on Mozilla devtools-html/debugger.html for more information about the project.

What’s the bug about?

Project maintainer @JasonLaster would like to add a mochitest verifying that clicking the pause button on the debugger results in the debugger pausing on the next execution.

Chosen growth

The goal of this release was to practice writing unit tests for an opensource project. Unit testing has been a primary concern of mine since I was first exposed to it about a year ago. I value quality software and understand that it is significant in both private companies and open source communities. Therefore, it is something I would like to make second nature when I program. This was the perfect opportunity.

Adding the test

The first step to fixing a bug for any opensource software is to fork the repository. Then, clone the forked repository onto the local machine and create a new branch:

git checkout -b issue-5446

The second part is learning more about mochitests. In short, mochitests are unit tests that use the MochiKit platform (an opensource project that Mozilla uses to test the Firefox browser). Using mochitests, developers can programmatically simulate actions that users will perform on the Firefox debugger.

Following the instructions in the mochitests documentation, I set up Mercurial (a Git-like version control system) and autoconf213, a tool that produces shell scripts to configure source code packages.

Next up was downloading a copy of Firefox into the repository and configuring links between the debugger.html/ directory and Firefox's test directory. Luckily, this can all be done using a single command: ./bin/prepare-mochitests-dev

Once the apparatus was set up, it was time to research how to write mochitests. @JasonLaster gave me some hints by pointing me to a specific test: he suggested I look at browser_dbg-breaking.js inside the src/test/mochitest directory.

@JasonLaster has outlined the steps the test needs to perform in the issue. The steps were as follows:

1. Load a page
2. click the pause button on the debugger
3. Eval a function
4. Assert that we are still paused

The first problem I encountered was that the pause button did not exist.

This was resolved by changing the debugger's settings:
1. In the debugger's console, execute: dbg.features
2. Set "remove-command-bar-options" to false

The second part was understanding how to write the tests, the most difficult pieces being clicking the pause button on the debugger and evaluating a function. After getting some clarification, I proceeded to experiment with the API to find the relevant functions to use. The API for the tests was located in src/test/mochitest/head.js.

Based on intuition, I first attempted pressKey(dbg, "pauseKey"), which simulated pressing the pause key on the keyboard. However, this did not work, and there were no obvious hints I could glean from the API at that point, so I turned to git grep to find uses of the word pause in other tests under the src/test directory. Professor Humphrey taught me to use git grep -C numberOfLines searchPhrase path to see the lines surrounding the search phrase.

While experimenting with the API, something really odd occurred. The symbolic linking had suddenly been undone, which led to the harness completely breaking. Unfortunately, this only happened once, so I was unable to reproduce the problem or find its root cause.

I tried deleting the entire directory, re-cloning the repository and re-running the configuration process. However, during configuration, there were error messages that I did not understand. After googling around, I was unable to find answers, as the problem is specific to this project, so I reached out on Slack for assistance.

@JasonLaster mentioned in the chat that to debug issues with the harness, I should go into the firefox/ directory inside the project folder and run ./mach mochitest --headless devtools/client/debugger/new.

After following those instructions, I diagnosed that my computer was missing the Rust compiler. To fix that, I ran ./mach bootstrap to get the compiler installed, but ran into another error. It was suggested that I run ./mach configure first.

After being presented with four flavors of Firefox to select from, I was instructed to build using Firefox for Desktop Artifact Mode. From there, I re-ran ./bin/prepare-mochitests-dev and, 5 minutes later, I was back in business.

Not wanting to waste any more time, I turned to looking at test files individually, based on the relevance of their names, to gain additional insight. I saw some UI tests using the function clickElement(). Looking inside head.js for its definition, I found that it takes the debugger instance as the first argument and a selector as the second. The list of selectors was defined as a dictionary. Scanning it, I did not see a selector for pause, even though one for resume existed with the value ".resume.active". I began to guess that this had to do with the user interface.

I initiated an instance of the debugger to confirm my hypothesis. Indeed, after using the inspector on the DOM, I could see that "resume" and "active" were both CSS classes of the resume button. I then inspected the pause button, and its selector turned out to be ".pause.active".

Executing the test with await waitForTime(15000) gave me a 15 second window to play around with the pause button under test mode to see outputs that were being logged to the console. In particular, when I clicked on the pause button, “1 BREAK_ON_NEXT” appeared. This was a hint for a command that I should listen for to ensure that the pause button was clicked!

I went back to head.js and added the property "pause" with the value ".pause.active" to the selectors dictionary. Then, inside my test, I added await waitForDispatch(dbg, "BREAK_ON_NEXT").

However, once I applied this, I began getting errors about cyclical references. It took a while to understand what was happening. I got additional help from @bomsy, who confirmed that I was on the right track in defining a selector for pause to simulate clicking the pause button.

After much digging, I realized the problem came from my improper use of invokeInTab(). I had passed the debugger into the function, which led to infinite self-invocation.

The next step was to figure out how to eval a function. According to @JasonLaster, I should have been able to use invokeInTab() to simulate executing a function from the console. However, this did not seem to work: the test repeatedly timed out waiting for the next action after the pause button was clicked.

Since I was unable to trigger this myself via the console, my temporary workaround was to execute the function as an expression in the watch window, which counts as an execution and triggers pausing the debugger.

I was able to find an example of this under src/test/mochitest/browser_debugger-expressions.js.

In addition, I added an extra line of code telling the test to wait for the debugger to pause, giving the pause effect time to kick in. All of the test code executes asynchronously and therefore requires await whenever it needs to wait for a command to resolve; otherwise, race conditions occur and the test may fail because execution has not yet completed.
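The same pitfall exists in any async test framework. Here is the idea distilled into a small Python asyncio sketch (hypothetical, not mochitest code): without the await, the assertion could run before the action's effect has landed.

```python
import asyncio

state = {'paused': False}

async def click_pause():
    # Simulate an action whose effect takes a moment to land
    await asyncio.sleep(0.01)
    state['paused'] = True

async def test_pause():
    # Awaiting the action guarantees its effect is visible before asserting;
    # dropping this await would make the assertion race the action.
    await click_pause()
    assert state['paused']

asyncio.run(test_pause())
print('test passed')
```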

I have since submitted a pull request and am currently waiting for @JasonLaster to review it. Updates will be made to this post once the review has been completed.

What I learned

While writing a test seemed very simple, a lot of the time was spent setting up the test harness and understanding the available API. While the Firefox Debugger team has documentation on mochitests, some areas were unclear. In particular, there were no instructions on how to troubleshoot when Firefox does not compile, and inside head.js, some functions were undocumented, requiring inference and guesses about what they do.

I felt I did better this time around, having reached out to the community for help much sooner than on my first pull request. I practiced using git grep to find specific keywords while searching for example usages of the test harness API I was learning. Furthermore, I extended this approach to reading test files other developers had written, to understand the usage of commands I may not have been aware of initially.

What surprised me the most is how the test harness can suddenly break, and how important documentation and community are in helping opensource developers get the environment steady enough to tackle the real issue at hand. I saw firsthand how big opensource projects can get, and that the work is never complete; there is just more work to be done.

[PR] Brave browser: Fix URL issues with Brave Browser using Test Driven Development

Issue#13897, Pull Request#13898.

What’s the bug about?

This week in class, my partners @irrationalRock, @Woody88 and I worked on a bug in the Brave browser concerning the way the address bar parsed URLs. Our goal was to first write tests for the bug, then create fixes that address the issue.

We looked at the following scenarios:

General strings
• “dog”
• ” dog ”
• “dog cat”
• ” dog cat ”

URLs with query string
• “https://www.google.ca/search?q=dog”
  • ” https://www.google.ca/search?q=dog ”
• “https://www.google.ca/search?q=dog cat” (failed to properly set search query string)
• ” https://www.google.ca/search?q=dog cat ” (Failed to properly set search query string)

File paths
• “/path/to/file/dog cat.txt” (failed to render)
• ” /path/to/file/dog cat.txt ” (failed to render)
• ” C:\Path\to\file with space.txt” (failed to render)
• ” C:\Path\to\file with space.txt ” (failed to render)
• “file:////path/to/file/dog cat.txt”
• ” file:////path/to/file/dog cat.txt ”

We came to the conclusion that whenever there is a space in a URL's query string or in a file path (Windows or Unix), the browser takes the entire string and performs a Google search.

After examining urlutil.js, we wrote a few tests inside test/unit/lib/urlutilTest.js. The tests do the following:

• Ensure prependScheme() is able to prepend the file scheme in front of a Unix file path

• Ensure prependScheme() is able to prepend the file scheme in front of a Windows file path

• Ensure URL encoding works for space characters in absolute paths

• Ensure URL encoding works for space characters in URLs with query string

We wrote the tests first and made sure they failed, then went hunting for the bug.

Looking for the bug

After cloning the repository and installing the node packages via npm install, we started the debugging process by executing npm run watch and enabling the debugger in VS Code. Looking inside js/lib/urlutil.js, we examined the way isNotURL() parses a string to determine whether it is a URL.

This is important, as the software's workflow runs in the following order:

1. Determine if the input has a valid scheme (i.e. http://, https://, file://).
2. Determine if it could potentially be a URL.
3. Trim the input.
4. Prepend the scheme if it is not already provided.
5. Check if the URL can be parsed and send it to the view.
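The ordering above can be sketched as follows (Python used purely for illustration; every helper here is an invented stand-in, not Brave's actual code):

```python
# Invented stand-ins that mimic the workflow's shape, not Brave's real logic
def has_scheme(s):
    return s.strip().lower().startswith(('http://', 'https://', 'file://'))

def is_not_url(s):
    # crude heuristic: unschemed text containing spaces is a search query
    return not has_scheme(s) and ' ' in s.strip()

def process_input(raw):
    if is_not_url(raw):            # steps 1-2: scheme check / URL heuristics
        return ('search', raw.strip())
    url = raw.strip()              # step 3: trim
    if not has_scheme(url):
        url = 'http://' + url      # step 4: prepend scheme
    return ('navigate', url)       # step 5: parse and send to the view

print(process_input(' dog cat '))        # ('search', 'dog cat')
print(process_input('www.example.com'))  # ('navigate', 'http://www.example.com')
```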

@irrationalRock noticed the issue lies in the regular expressions used in isNotURL().

const case2Reg = /(^\?)|(\?.+\s)|(^\.)|(^[^.+]*[^/]*\.$)/

The intent of the second regular expression was to filter out inputs that match the following cases:

  • starts with “?” or “.”
  • contains “? “
  • ends with “.” (and was not preceded by a domain or /)

However, in doing so, the case (\?.+\s) ended up filtering out URIs with legitimate query strings. The ".+" in the regular expression matches one or more non-newline characters; the original intent, however, was only to filter out inputs that contain "? ".

The solution was to change this regular expression to the following:

const case2Reg = /(^\?)|(\?\s+)|(^\.)|(^[^.+]*[^/]*\.$)/

By modifying that particular part of the regular expression to (\?\s+), we corrected the problem by restricting the filter to engage only when it detects a "?" character followed by one or more whitespace characters.
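To see the difference concretely, here is a quick check of both patterns. I am running them in Python purely for illustration; these particular expressions behave the same way in JavaScript:

```python
import re

# The regular expression before and after the fix
old_case2 = re.compile(r'(^\?)|(\?.+\s)|(^\.)|(^[^.+]*[^/]*\.$)')
new_case2 = re.compile(r'(^\?)|(\?\s+)|(^\.)|(^[^.+]*[^/]*\.$)')

url = 'https://www.google.ca/search?q=dog cat '

# Old pattern wrongly flags the legitimate query string as "not a URL"
print(bool(old_case2.search(url)))  # True

# The fix only engages on "?" followed directly by whitespace
print(bool(new_case2.search(url)))  # False
print(bool(new_case2.search('dog? cat')))  # True
```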

This resolved the issue with a space in the query string. While we originally explicitly converted the space character into %20 to encode the URL, Professor Humphrey has noted that the URL encoding may occur implicitly when it gets sent back out to the browser. We did some extra investigation, and indeed, once the input string reaches getUrlFromInput(), return new window.URL(input).href on line 160 of urlutil.js will automatically URL encode the space character.

This resolved our very first bug.

Resolving Unix absolute file path with space in file name issue

The second problem was Brave interpreting a UNIX absolute file path with a space in the file name as a string for a Google search. While tracing the way Brave parses such an input string, I noticed that the regular expression in isNotURL() had once again overaggressively classified it as a non-URL before the file scheme could be prepended to the input string.

The problem lies in line 134 of urlutil.js

if (case2Reg.test(str) || !case3Reg.test(str) ||
    (scheme === undefined && /\s/g.test(str))) {
  return true
}

In the case of an absolute file path with a space in the file name, scheme === undefined && /\s/g.test(str) will classify it as not a URL.

I first consulted with Google Chrome and Firefox to see how these two popular browsers handled UNIX absolute file paths with space in the file names and what occurs when bogus paths were provided. This was done by directly testing inputs in these browsers as well as reading the opensource code for both projects.

It turns out that both browsers assume that if a string begins with a forward slash, it is a UNIX absolute file path, regardless of whether the path actually exists.

My solution, then, was to check whether the input string begins with a "/" character; the regular expression /^\// resolved the issue.

Resolving Windows file path with space in file name issue

The last issue to tackle was the same as the UNIX absolute file path issue, except this time on the Windows platform. The difficulty lies in the fact that Windows uses drive letters and backslashes.

Due to the similarity between this issue and the UNIX one, my proposed solution was to build on what I had already written. I refactored my previous code and created a new regular expression case.

const case5Reg = /(?:^\/)|(?:^[a-zA-Z]:\\)/

The "?:" along with the enclosing parentheses forms a non-capturing group: a grouped regular expression that does not keep track of its match. The "|" represents an OR, followed by a regular expression that detects any Windows drive letter.
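Checked in Python for illustration (the pattern itself is unchanged from the JavaScript above):

```python
import re

# Two non-capturing groups: a Unix absolute path OR a Windows drive letter
case5 = re.compile(r'(?:^/)|(?:^[a-zA-Z]:\\)')

print(bool(case5.search('/path/to/file/dog cat.txt')))        # True  (Unix path)
print(bool(case5.search(r'C:\Path\to\file with space.txt')))  # True  (Windows path)
print(bool(case5.search('dog cat')))                          # False (plain search text)
```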

This allows a Windows file path to be treated as a URL. However, this alone is not enough, so I researched how a Windows file path is represented using the file scheme.

Once I understood how a file-scheme URI with a Windows path should look, I added a Windows file scheme constant that detects Windows drive letters case-insensitively. Inside prependScheme(), I added additional logic to check for a Windows file path.

Once a Windows file path is detected, a regular expression replaces each backslash with a forward slash before the file scheme is prepended.

if (windowsFileScheme.test(input)) {
  input = input.replace(/\\/g, '/')
  input = `${fileScheme}/${input}`
}
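The transformation this snippet performs can be sketched in Python as follows. Note that windowsFileScheme and fileScheme are not shown in the post, so the values below (a drive-letter regex and the string 'file://') are my assumptions:

```python
import re

# Assumed equivalents of the post's constants (hypothetical reconstructions)
windows_file_scheme = re.compile(r'^[a-zA-Z]:\\')  # matches either letter case
FILE_SCHEME = 'file://'

def to_file_uri(path):
    if windows_file_scheme.search(path):
        path = path.replace('\\', '/')   # each backslash becomes a forward slash
        return f'{FILE_SCHEME}/{path}'   # mirrors `${fileScheme}/${input}`
    return path

print(to_file_uri(r'C:\Path\to\file.txt'))  # file:///C:/Path/to/file.txt
```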

After rerunning all the tests using npm run test -- --grep="urlutil", I verified that all test cases (including the ones I had written) passed. Just to be safe, I launched Brave to try each fix manually, and indeed all of the issues mentioned had been fixed.

Conclusion

The test driven development style of programming is new to me and took a bit of time to get used to. It is difficult to visualize what a fix will look like before the code is written. However, it pushed me to read the underlying code and explore how Brave operates and which functions it traverses while parsing the URI string the user enters into the address bar, which gave me an idea of what these tests should look like.

Furthermore, I learned to read code that other developers have written for other opensource projects. As a result, I saw new ways of writing code that I had never seen before, which gave me the opportunity to learn to write better, more concise code.

Debugging bundled and transpiled code using VS Code Debugger

TLDR

Debugging a web application built using WebPack

In a recent project assigned by my professor, he showed us some really cool developer tools. In particular, one of them was Webpack: a "compiler" that packs an entire web application into a single JavaScript file, which can be executed by browsers. This is important if you are running code that relies on transpilers such as Babel.

While this tool is truly innovative and offers great advancements in the software development workflow, it begs the question: "How do I debug these apps?"

The Problem: Web apps with bundled files need to be compiled first

Based on my limited knowledge of Node.js, the straightforward approach to debugging any Node.js app is to use a powerful editor such as VS Code and add a configuration that launches the program by targeting index.js or app.js.
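For reference, that straightforward launch configuration in VS Code's launch.json looks roughly like this (the entry-point path is an assumption; adjust it to your project):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceFolder}/index.js"
    }
  ]
}
```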

A problem you will quickly encounter is that if the application uses Babel and JavaScript features not yet implemented in Node.js, VS Code will throw an error. For example:

SyntaxError: Unexpected token import

If the code is packed with WebPack using only JavaScript features supported by Node.js, the app simply won't start, with no error messages given.

And if that didn’t work, the next option would be to use the Launch via NPM method.

In order to debug apps like this, a source map is required so the debugger can map each line of the bundle back to its location in the individual source files. The source map is generated by WebPack (with a special option enabled) when the packing takes place.

Furthermore, advanced JavaScript features that are not yet available in the current version of Node.js also need to be transpiled first, which occurs when the code is assembled. Usually, this involves having the web application already running (i.e. via webpack-dev-server).

An attempt using the Launch via NPM method failed as well.

Solution: VS Code Google Chrome Debugger plugin

After consulting with Professor Humphrey, he pointed me to the VS Code Chrome Debugger plugin. The idea is to start the app in the terminal as usual, then have VS Code use an instance of Google Chrome as a proxy to run the app.

Once the code hits a breakpoint, Google Chrome will report back the current state of the app, and the results will be displayed in VS Code as if you had never left the editor at all. It truly is magic.

Step 1: Set your desired breakpoints

I like setting the breakpoints ahead of time, but doing this step last can’t hurt either.

Step 2: Configure VS Code Debugger

Click on the Debugger icon and, in the drop-down menu next to the word DEBUG, select Add Configuration, then select Chrome: Launch.

Check to ensure that the port in the url field matches your app's configuration.
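The generated launch.json entry looks roughly like this; port 8080 is just the template's default, so match it to whatever your dev server uses:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "chrome",
      "request": "launch",
      "name": "Launch Chrome",
      "url": "http://localhost:8080",
      "webRoot": "${workspaceFolder}"
    }
  ]
}
```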

Step 3: Launch the app in terminal

Bring up your terminal and launch your web application. Typically, this may be one of:

yarn start
yarn debug
npm start
npm run debug

Step 4: Launch Google Chrome from VSCode Debugger

From VS Code's debugger page, press the green arrow button to launch Chrome.

Once the code hits the breakpoint you've set, VS Code will activate and pause on that line of code for you to inspect! Happy debugging.

Other solutions

I wrote about this method because it was the easiest one for me to learn. There are also other ways to debug web apps like these, such as those outlined by Microsoft here.

[PR] Debugger.html: Quick open style selected matches

Issue #5679, Pull Request #5750.

This is a pull request blog post. Please see my blog post on Mozilla devtools-html/debugger.html for more information about the project.

What’s the bug about?

The current implementation of "quick open", a command-palette-like feature in the debugger tool, uses bold black text to highlight letters that fuzzy-match what the user has typed into the quick open search box. However, when an item is selected, the item's text does not conform to the visual aesthetic of Mozilla products such as Firefox, where the text of a selected item appears white and fuzzy-matched letters appear as bold white text.

At the direction of @violasong, a Mozilla UI designer, the goal is to make the fuzzy-matched letters on a selected item appear as bold white text.

Fixing the bug

The first step to fixing a bug in any open source software is to fork the repository. Then, clone the forked repository onto the local machine and create a new branch:

git checkout -b issue-5679

The second part is working through all of its dependencies so the application compiles and runs. The repository has very extensive information regarding the setup of the application. Once setup was complete, I explored the application as a first-time user.

In order to fix a problem, one must first understand the problem. This means knowing how to reproduce the scenario in which the selected item in quick open shows the bold black text. Being able to trigger the quick open panel also means having a way to trace the location of the source file of interest. Due to the similarities between the debugger tool and Firefox's native debugger (this debugger is actually used in Firefox as well), I got confused and thought I was supposed to open my native Firefox debugger to use quick open.

Luckily, the Mozilla team has a Slack channel. @anshulmalik, a team member of the repository, asked me for my steps to reproduce in order to figure out what I was missing.

I realized that in order to get the command palette to load as expected, I needed to be inside the debugger app's browser window. When the app launched initially, it had a button called "Launch Firefox". That browser window also looks different (it has an orange background in the address bar), so I thought it was the debugger. It turns out that was the debugger's browser, used for browsing projects. The browser window which I pointed to localhost:8000 was the actual debugger. I needed to select a project in the debugger window in order to begin debugging it.

From there on, using Firefox's native HTML inspector, I was able to pinpoint the HTML associated with the command palette.

Two things stood out to me when I inspected the HTML. A <div> with the classes selected and result-item is associated with the highlighted item. Under that node is a title node, followed by a nested div node that contains a mark element. The mark element has a class named highlight and contains the letter "t", which happens to be the same letter I typed into the search box.
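Putting that description together, the inspected markup looked approximately like this (a reconstruction from the description above, not the exact DOM):

```html
<!-- approximate structure of a selected quick open result -->
<div class="selected result-item">
  <div class="title">...</div>
  <div>
    <mark class="highlight">t</mark>est-file.js
  </div>
</div>
```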

The next step was to find the name of the source file that controls this rendering. I went straight to the issue tracker and searched through closed issues, looking for UI-related issues and issues about searching. During my search, I found a particular issue related to the UI component I was working on and dug deeper into its associated pull request. A pull request contains a Files changed tab, which gave me clues as to which files the contributor modified. A really interesting file stood out at first glance: QuickOpenModal.js, located inside the src/components directory. When I opened that directory in the project folder, I saw QuickOpenModal.js. Bingo! Light bulbs went off in my head. At the same time, I realized that what I had been calling the command palette was actually called Quick Open.

After some experimentation, I was able to create the fix.

.selected .highlight {
  color: white;
}

Once the updates were pushed to the forked repository, I went back to the upstream repository, where I was immediately greeted with the option to open a pull request.

Clicking on the Compare & pull request button presented me with a preformatted form to fill out, stating the issue number, what I fixed, and a test plan: a bullet-pointed list of how the maintainer can verify that the fix works. In the request form, I also included a screenshot to show the final result.

From there on, everything was history!

What I learned

I think the most important thing to mention is that open source was not as scary as I had initially thought. I want to thank @jasonlaster for assigning me the bug and @anshulmalik for helping me through the issues I encountered. I also want to thank Professor Humphrey for encouraging me to choose a bug and for introducing me to the Mozilla team.

It was a valuable experience learning the steps needed to contribute to any community. It begins with doing research about the project and understanding how the community works, which means reading the available documentation and trying out the project. While there's a tendency to think software developers work in isolated cubicles, working with open source opened the doors to collaborating with people across different cultures and geographic areas. Once the socializing settles, it is time to search for issues to tackle, then track down the related source file and provide a fix.

I've also found GitHub's pull request and issue templates interesting. Just observing how other projects are managed taught me a tremendous amount about open source. Furthermore, learning that GitHub recognizes specific keywords in pull request titles, descriptions, and even commit messages to automatically close issues was an eye opener. This lets anyone streamline issue management, making the open source experience even better.

Mozilla is also a major user of ESLint and Prettier. In fact, their Prettier configuration checks all files, even CSS files! That was shocking to me.

Issue titles are very important. What I thought was a command palette was actually Quick Open. Had I taken more notice of my issue's title, I would have realized that I could use it as the search term when looking through closed issues, finding relevant ones sooner.

Steps to reproduce were, are, and always will be very important when tackling any bug. To solve a problem, you first need to be able to encounter it. This holds whether you are filing an issue for others to resolve in your own project or describing a problem to a team member or mentor of a project you're contributing to.

Finally, I learned to ask for help. Having a great team such as Mozilla, with mentors willing to extend their help, gave me the confidence to tackle tough bugs in the future. Asking for help means being able to pinpoint areas of interest faster. Furthermore, folks with more experience can point you to resources you may not know enough about to Google for, which can advance your knowledge when tackling a difficult problem.