Software is hard
http://www.softwareishard.com/blog
More musings on software development

Modern React Component Testing with create-react-app, Jest, and Enzyme
http://www.softwareishard.com/blog/testing/modern-react-component-testing-with-create-react-app-jest-and-enzyme/ (Wed, 28 Jun 2017)

This post is written by Charlie Crawford, who teaches for appendTo, which offers React training courses for developer teams.

There are many things to love about React, but one of the biggest pain points is project bootstrapping. Because React takes a modular "roll your own" framework approach, it can take some time to get your project boilerplate up and running. Thankfully, create-react-app has come onto the scene with powerful, configuration-free React boilerplate. While create-react-app tries to remain fairly agnostic and unopinionated, more and more functionality has been introduced into the project over time. Specifically, testing has progressed with the revamped version of Jest (the "official" Facebook React testing tool). That being said, Enzyme (a popular third-party React testing library by Airbnb) is still a vital part of the React testing stack. It can be a little unclear how create-react-app, Jest, and Enzyme should work together. The official guide offers some insights on how to load Enzyme into your project, but doesn't really explain the role Enzyme plays. Let's change that.

What Enzyme brings to the table

Enzyme brings all sorts of utility functions for testing React applications. However, the biggest benefit of using Enzyme is "shallow rendering". Shallow rendering allows you to render a component without rendering its children, which makes proper unit testing of React components possible. Let's say you are writing unit tests for a function a that invokes function b. If they are true unit tests - and not integration, end-to-end, or some other form of testing - the implementation of function b should not affect the unit tests of function a. Analogously, if you have a React component A that renders sub-components B and C, you don't want the implementation of B and C affecting the unit tests for component A. Shallow rendering is what makes this possible. Let's see it in action with the beginnings of a typical todo list app.
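The function analogy above can be sketched in plain JavaScript (the names a and b are just the placeholders from the text, and the dependency-injection style is one possible way to stub b out):

```javascript
// Hypothetical functions from the analogy: `a` invokes `b`.
function b() {
  // Imagine a complex real implementation here.
  return 42;
}

function a(bImpl = b) {
  // `a` depends on `b`, but accepts an injectable implementation
  // so a unit test can isolate `a` from `b`.
  return bImpl() + 1;
}

// A true unit test of `a` stubs `b` out entirely, so later changes
// to the real `b` cannot break this test.
const result = a(() => 0);
// result is 1 regardless of what the real `b` does
```

Shallow rendering gives you the same isolation for components without any manual stubbing: sub-components simply aren't rendered.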

The app we will be building

We will be using vanilla React for this application. While this could easily be extended to Redux or other state management solutions, we will keep this tutorial focused on component testing. The app will consist of an App, a TodoList, and a Todo component. We will bootstrap this app with create-react-app, and will use Enzyme and shallow rendering to facilitate writing true unit tests for the TodoList component.

To begin, globally install create-react-app using yarn or npm. After installing, use create-react-app to bootstrap the new application.

npm install -g create-react-app
create-react-app todo-testing

 

cd into the todo-testing app directory and install Enzyme:

cd todo-testing
npm install --save enzyme react-addons-test-utils

 

cd into the src directory and create a few files:

cd src
touch Todo.js TodoList.js Todo.test.js TodoList.test.js

 

Go into the App.js component and tell it to render our future TodoList component.

import React, { Component } from 'react';
import './App.css';
import TodoList from './TodoList';
 
class App extends Component {
  render() {
    return (
      <div className="App">
        <TodoList/>
      </div>
    );
  }
}
 
export default App;

 

Next, let's write the TodoList Component.

import React, { Component } from 'react';
import Todo from './Todo';
 
class TodoList extends Component {
  constructor(props) {
    super(props);
    this.state = { todos: [] };
  }
 
  addTodo(todo) {
    this.setState({ todos: this.state.todos.concat(todo) });
  }
 
  render() {
    return (
      <div className="App">
        <p>Our Todo List</p>
        {this.state.todos.map((todo, i) => <Todo key={i} todo={todo}/>)}
      </div>
    );
  }
}
 
export default TodoList;

 

Before writing test cases for TodoList, let's stub out a Todo component
so our code will compile.

import React from 'react';
 
const Todo = () => (
  <div></div>
);
 
export default Todo;

 

We can now write a test case in TodoList.test.js:

import React from 'react';
import { shallow } from 'enzyme';
import TodoList from './TodoList';
 
it('renders "Our Todo List"', () => {
  const wrapper = shallow(<TodoList/>);
  const textHeader = <p>Our Todo List</p>;
  expect(wrapper.contains(textHeader)).toEqual(true);
});
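A second test (a sketch, still in TodoList.test.js) can exercise addTodo through the shallow wrapper; wrapper.instance(), wrapper.update(), and wrapper.find are standard Enzyme APIs, and the test assumes TodoList keeps its todos in component state as shown above:

```javascript
import React from 'react';
import { shallow } from 'enzyme';
import TodoList from './TodoList';
import Todo from './Todo';

it('renders one Todo per item added via addTodo', () => {
  const wrapper = shallow(<TodoList/>);
  // Call the component method directly through the shallow wrapper.
  wrapper.instance().addTodo('Buy milk');
  wrapper.update();
  // Shallow rendering still exposes Todo elements without rendering them.
  expect(wrapper.find(Todo).length).toEqual(1);
});
```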

 

npm test
(Screenshot: formatted test output)

 

Besides running much faster, tests that use shallow rendering are largely unaffected by the implementation of subcomponents. For example, let's change the implementation of the Todo component and rerun the test.

const Todo = '';
 
export default Todo;

 

npm test
(Screenshot: formatted test output)

 

Despite Todo not even being a real React component, our unit test for TodoList still runs and passes! Now you can achieve truly isolated unit testing of your components, and improve the reliability of your React applications.

 

Why Load Test?
http://www.softwareishard.com/blog/testing/why-load-test/ (Tue, 21 Feb 2017)

Tips on why you want to load test your website, web apps, and APIs in 2017, plus a few tips on setup and implementation.

This post is written by Jaymi Tripp from Dotcom-Monitor.

1. You are expecting an influx in traffic or sales

If you know that you will see an increase in visitors to your website, load testing is crucial - no website is invincible. In 2003, Amazon ran into server overload and legal issues after someone entered incorrect prices for some popular electronic items. Even the government is susceptible to crashes; we all remember the launch of Obamacare, with its incredible page load times and constant glitches. Rumor has it that the site never went through any load testing scenarios, and there was no information on what its capacity actually was.

2. Insight

Knowing what you need to ensure your site performs under pressure is extremely valuable. This allows you to have the tools in place to ensure your website can weather the storm. The element of surprise is never appreciated in website performance. Insight into the functions and performance of your website will ensure happy customers and a website that functions as expected.

3. You want to know what your user experience is like

When using cloud-based load testing software, you can not only run tests but also record video of a test in progress. For example, on LoadView-Testing.com, a cloud-based load testing platform, you can record the experience as your users see it and test right down to the individual element on the page. LoadView uses the EveryStep Automation Tool to do this. The best thing about the EveryStep Script Recorder? It's free and extremely easy to use. Below is a screenshot of the interface while recording a script.

4. You want to identify third party issues when under heavy load

Perhaps you are using a CDN to host all of your images, or you have a chat function through a third-party vendor. It is important that your visitors are able to use and see the features of your website regardless of how many users are accessing it. Load testing will let you know exactly which applications are failing to perform and, in turn, failing to hold up their end of your service agreement.

5. You have an e-commerce website

If your website is your bread and butter, then you had better make sure it functions, and this is especially important for websites with a shopping cart. Orders not delivered or sent to the wrong address, customers charged for items they never bought - all of these can be side effects of heavy traffic. Your conversion funnel should be load and stress tested on a regular basis so you know how it functions during peak times. This gives you great insight into why customers could be abandoning your site.

6. You did some major (or minor) updates

Just like the Amazon pricing mishap mentioned above, this can also take other forms, such as a recent redesign or platform update. Establishing baseline performance markers and seeing what your pages look like on load, in real browsers for real users, will lower the risk of failure. After installing an update, those performance markers can help narrow down whether your updates caused the performance of your website to falter.

Tips when setting up load testing

1. Use an outside source

This will allow you to test from multiple locations around the world and simulate actual traffic more accurately, as well as keep your in-house costs down. Load Impact, LoadView, Loader.io, and BlazeMeter all allow you to set up testing locations around the world, many of them at the same time.

2. Ensure your service of choice provides you with actionable data

What good is load testing if you don't have enough information to take action after testing completes? While all services will give you reports, very few offer customizable reporting. LoadView offers a nice drag-and-drop reporting dashboard for adding graphs and reports, and Load Impact goes as far as letting you create custom graphs. Both let you export your data for every single request made, which sets these two platforms apart from others that offer no customization at all.

3. Choose a provider with the support you need

If you need a higher level of support, be sure to choose a company that offers it. Using LoadView, I learned they have a full support staff, a helpful knowledge base and a series of video tutorials that are a big help.

4. Know what services your platform utilizes

For instance, BlazeMeter uses Amazon Web Services and Google Cloud Platform Live. LoadView uses both of those services and also utilizes Rackspace MyCloud; from what I can see, it is one of the few that uses three platforms instead of the standard two.

5. Define baseline performance metrics

This is critical in any area of online testing. Baseline metrics establish where you are now, when a load test starts to impact the performance of your website or apps, and finally when it goes into failure. Another thing to consider: what would constitute failure for your website or applications? Establish these metrics right away so you know what you are looking for in terms of performance. Using LoadView, I have been able to store historical data since opening my account, which has been extremely helpful when reporting.

Conclusion

In 2017 you are hard-pressed to find a reason NOT to perform regular performance checks on your websites, web apps, or other online services. Most of the services mentioned above also come with an entire suite of monitoring solutions. LoadView is a part of the Dotcom-Monitor load testing platform, so every account has access to web page speed tests, mail server testing, streaming video test tools, ping testing, and much more within the dashboard. Testing frequently and thoroughly is the only way to ensure your website and web apps are up and running even when you are not looking. Customize your alerts to send you text messages, emails, or other notifications so you and those responsible can respond quickly to problems before they affect your customers or bottom line.

Jaymi Tripp

Inspecting WebSocket Traffic with Firefox Developer Tools
http://www.softwareishard.com/blog/planet-mozilla/inspecting-websocket-traffic-with-firefox-developer-tools/ (Mon, 11 Apr 2016)

WebSocket Monitor is an extension for Firefox Developer Tools that can be used to monitor WebSocket connections in Firefox. It allows inspecting all data sent and received.

It's been a while since we published the first version of our add-on for inspecting WebSocket traffic, so it's a good time to summarize all the new features and show how it's integrated with Firefox Developer Tools.

Download the signed version of this add-on from AMO. The source code, with further documentation, is available on GitHub.

Update 2019/10/21: New WebSocket inspector has been released in Firefox 71

WebSocket Monitor can be used to track any WS connection, but the following protocols have extra support: Socket.IO, SockJS, plain JSON, WAMP, and MQTT.



Introduction

After the add-on is installed, open Firefox Developer Tools (F12 on Windows or ⌥⌘I on OS X) and switch to the new Web Sockets panel. The panel displays a list of frames sent and received by all WebSocket connections on the current page, as well as Connect and Disconnect events.

The screenshot above shows one Connect event, one Sent frame, and one Received frame. There is also a summary at the bottom of the list showing the number of frames in the list, the total size of the transferred payload, and the total time since the first frame.
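For reference, even a minimal page script like the following produces exactly those entries (the echo server URL is just an illustration; any WebSocket endpoint would do):

```javascript
// Hypothetical page code; opening the connection shows up as a
// Connect event in the Web Sockets panel.
const ws = new WebSocket('wss://echo.example.com/socket');

ws.addEventListener('open', () => {
  // Appears in the panel as a Sent frame with a JSON payload.
  ws.send(JSON.stringify({ type: 'greeting', text: 'Hello!' }));
});

ws.addEventListener('message', (event) => {
  // Appears in the panel as a Received frame; the payload is
  // inspectable in the side panel.
  console.log('received', event.data);
});
```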

The screenshot below shows the content of the side panel, which displays all details of the selected packet.

Filtering

The extension allows simple filtering of the frame list. You can filter using a keyword, so that only frames with the keyword in the payload are displayed, or you can pick a connection ID and see only frames sent or received through that connection.

Protocols

WebSocket Monitor allows inspecting any WS connection, but there is extra support for the following protocols:

  • Socket.IO
  • SockJS
  • Plain JSON
  • WAMP
  • MQTT

These protocols get an extra side bar with the parsed payload. See the next screenshot, which shows a parsed Socket.IO frame payload as an expandable tree allowing quick inspection.

Table and Chat Perspectives

There are two ways to visualize frames. Apart from the Tabular View (see the screenshot above), there is also a Chat View that uses the well-known 'user chat' approach found in various messengers.

Inline Data Preview

Both perspectives also offer an inline data preview. You don't always have to select the frame and go to the side bar; you can open the data directly in the frame.

There are more small and nifty features, so don't forget to check out our wiki if you are interested!

Resources

Pixel Perfect 2, Developer Tool Extension Architecture
http://www.softwareishard.com/blog/extension-architecture/pixel-perfect-2-developer-tool-extension-architecture/ (Tue, 31 Mar 2015)

I have recently been working on the Pixel Perfect extension, which allows web designers to overlay a page with a semi-transparent image and tweak the page HTML/CSS with per-pixel precision until it matches the overlay.

This extension hadn't been working for several years (it wasn't maintained), and since it was requested by many users, the Firebug Working Group (FWG) got the opportunity to build it again, this time on top of the native developer tools in Firefox.

We had two goals in mind when building the extension:

  • Make the Pixel Perfect feature available again
  • Show how to build a real world extension on top of native API and tools in Firefox

This post focuses on the internal architecture. There is another post if you are more interested in the feature itself.

Requirements

The extension is based on the Add-on SDK as well as native platform APIs, and we are also using the JPM command line tool for building the final XPI.

There are several design decisions we made:

  • Support upcoming multiprocess browser (e10s)
  • Support remote devices (connected over RDP)
  • Use known web technologies to build the UI (ReactJS)

These decisions had obviously an impact on the internal architecture described below.

Multiprocess Browser (e10s)

There are already articles about the upcoming multiprocess support in Firefox, so just briefly, here is what it means for extensions, and specifically for Pixel Perfect: the cornerstone of this concept is that the web page runs in a different process than the rest of the browser (where the rest of the browser also includes extensions).

  • Pixel Perfect 2 UI (the rest of the browser) is on the left side. It can't access the page content directly, since the content runs in a different process (or can even run on a remote device, but more about that later).
  • The web page is on the right side; it runs in its own content process. It's secure (one of the main points of e10s), since it isn't that simple to access it (not even for extension developers 😉).

Pixel Perfect 2 (PP2) needs to properly cross the process boundary, set up messaging, and deliver an image layer into the page content. The communication between processes is done using message managers, which usually exchange JSON-based packets with data.
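As a rough sketch of that message manager pattern (the message names and file paths here are made up for illustration; loadFrameScript, sendAsyncMessage, and addMessageListener are the standard Firefox message manager APIs of that era):

```javascript
// Chrome process (the extension side): load a frame script into the
// tab's content process and talk to it with JSON packets.
const mm = gBrowser.selectedBrowser.messageManager;
mm.loadFrameScript("chrome://pixel-perfect/content/frame-script.js", false);
mm.sendAsyncMessage("pixel-perfect:add-layer", {
  url: "layer.png", x: 0, y: 0, opacity: 0.5
});

// frame-script.js - runs in the content process, next to the page.
addMessageListener("pixel-perfect:add-layer", (message) => {
  const { url, x, y, opacity } = message.data;
  // Here the layer would be rendered over the page content.
});
```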

Remote Debugging Protocol

Another challenge is making new features remotable. Don't worry, there is already an API in place that allows implementing astounding things.

The user scenario for PP2 is as follows:

  • The user runs Developer Toolbox and PP2 on his desktop machine.
  • The Toolbox connects to an instance of Firefox running on a remote device.
  • The user picks a new layer (an image file) on his desktop.
  • The layer (image) appears inside loaded web page on the remote device.

Awesome, right? The user can tweak even mobile device screens to pixel perfection!

PP2 needs to figure out a bit more to be remotable. There is not only a process boundary but also a network boundary to cross. Let's see a new, more detailed picture.

  • Pixel Perfect 2 UI runs on the client side (in the chrome process). This is the desktop machine in our use case. It communicates with a Front object (RDP terminology).
  • Front represents a proxy to the content process (local or remote). Executing methods on this object sends RDP packets across processes and the network to the back-end (a local or remote device), all through the RDP connection.
  • Actor is an object that receives packets from the Front object. The Actor lives in a content process (in the case of PP2), and so it has direct access to the web page. This object is responsible for rendering layers (images) within the page. The Actor also sends messages back (e.g. when layers are dragged inside the page, to update coordinates on the client).
  • In-page Layer is the image rendered over the page content. Note that images are not inserted directly into the page content DOM (that could be dangerous). They are rather rendered within a canvas that overlays the entire page. There is a platform API for this, and the element highlighter (used by the Inspector panel) also uses this approach.

One thing to note: the RDP connection between the Front and the Actor crosses the network boundary and gets into a content process on the back-end automatically. It's also possible to create an Actor that lives in the chrome process on the back-end, but more about that in another post (let me know if you are interested).

User Interface

The Pixel Perfect 2 UI consists of one floating popup window that allows layer registration. The window looks as follows.

By the way, the popup can be opened by clicking a button available in the main Firefox toolbar (and there is also a context menu with links to some online resources).

Implementing a user interface is often hard, and one of our goals was to show how to use well-known web technologies when building an add-on. The popup window consists of one <iframe> element that loads a standard HTML page bundled within the add-on package (XPI). The page uses the RequireJS + ReactJS web stack to build the UI. Of course, you can use any library you like to generate the markup.

There is yet another great thing: the frame uses content privileges only (type="content" and the resource:// protocol for the content URL), not chrome privileges at all. It's as safe as any page loaded from the wild internet.

Architecture

Let's sum everything up and see the final picture linked with the actual source code.

The explanation goes from top to bottom (client side -> back-end) starting with the PP2 UI.

  • popup.html This is the popup window. It consists of a bunch of ReactJS templates (see the files in the same directory). The communication with the PixelPerfectPopup object (pixel-perfect-popup.js), which lives in chrome content, is done through a message manager and JSON packets. Everything that lives in the data directory has content privileges; stuff in the lib directory has chrome privileges.
  • pixel-perfect-popup.js The PixelPerfectPopup object is implemented in this module. It's responsible for communication with the popup window as well as communication with the back-end through PixelPerfectFront (pixel-perfect-front.js). If the user appends a new layer, popup.html sends a new event to PixelPerfectPopup, which stores the layer in the local store (a JSON file) and sends a packet including the image data to the back-end PixelPerfectActor. The Actor gets access to the canvas and renders the image.
  • pixel-perfect-store.js PixelPerfectStore is implemented in this file. It's responsible for layer persistence. Everything is stored inside a JSON file within the current browser profile directory.
  • pixel-perfect-front.js PixelPerfectFront is implemented in this file. It represents the proxy to the back-end. The code is nice and simple; most of the work is handled by the RDP protocol automatically.
  • pixel-perfect-actor.js PixelPerfectActor is implemented in this file. This file (a module) is loaded and evaluated on the back-end, so be careful with module dependencies: the back-end can be a mobile device, and all necessary resources need to be sent from the client (e.g. a stylesheet). The actor uses the Anonymous Content API and renders the layer/image received from the client. It also sends events back to the client: e.g. if a layer is dragged within the page, it sends the new coordinates to PixelPerfectFront, which forwards them to PixelPerfectPopup and further to popup.html to update the final ReactJS template.
    If you are a ReactJS fan, you'll love the code. The JSON packet received all the way from the back-end actor (crossing process, network, and security boundaries) is finally passed to the panel.setState(packet) method to automatically update the UI. Oh yeah, pure pleasure for a passionate developer 😉

That's it for now. The rest is in the source code (there are a lot of comments; ping me if you need more).

 

We (the Firebug Working Group) care a lot about the extensibility of the native developer tools in Firefox, and as we make progress on the new generation of Firebug, we are also building new extensible APIs on the platform. If you want to know more about how to build developer (or designer) tool extensions, stay tuned. The next post will start a fresh new tutorial: Extending Firefox Developer Tools.

Resources

Jan 'Honza' Odvarko

Firebug Internals II. – Unified object rendering
http://www.softwareishard.com/blog/firebug/firebug-internals-ii-unified-object-rendering/ (Tue, 10 Jun 2014)

Firebug 2 (released today!) uses a number of internal architectural concepts that help implement new features as well as effectively maintain the code base.

Using a transparent architecture and well-known design patterns has always been one of the key strategies of the (relatively small) Firebug team; it allows us to maintain a rather large set of features in Firebug.

This post describes the way Firebug deals with JavaScript object representation and the concept that ensures an object is always rendered the same way across the entire Firebug UI.

  • Firebug 2.0 is compatible with Firefox 30 - 32

 

See also the list of new features in Firebug 2.

Firebug Internals I.

Unified Object Rendering

Firebug (as a web developer tool) primarily deals with JS objects coming from the currently debugged page. All these objects are displayed to the user, allowing further exploration and inspection. An important aspect of the rendering logic is that an object is always rendered using the same scheme (a template) across the Firebug UI. It doesn't matter whether the object is displayed in the Console panel, the DOM panel, or inside the Watch panel when Firebug is halted at a breakpoint. It always looks the same and also offers the same set of actions (through the context menu).

Let's see an example. The following three images show how the <body> element is displayed in different Firebug panels.

Here is <body> logged in the Console panel.

This screenshot displays <body> in the DOM panel.

And the last screenshot shows how it looks in the Watch side panel.

The element is always rendered using the same template and also the context menu associated with the object offers the same basic actions (plus those related to the current context).

Architecture

The architecture behind unified rendering is relatively simple. The logic is based on a repository of templates, where every template is associated with a JS object type (number, string, etc.). When a panel needs to render an object, it gets the object's type and asks the repository for the template associated with it. The template is consequently used to generate the HTML markup.

Firebug uses the Domplate engine for templates, but any other templating system could be used instead.

  • An object (a JS object coming from the debugged page content) is logged into the Console panel.
  • The panel asks the repository to render the object.
  • The repository finds the right registered template for the object (usually according to the object's type).
  • Finally, the template renders itself using the original object as data.

Implementation

Let's see a few code examples that show what a (simplified) implementation looks like from the JavaScript perspective.

Here is how getTemplate can be implemented (note that the actual Firebug implementation is a bit different):

getTemplate: function(object)
{
    // Iterate registered templates and return the
    // one that supports the given object.
    for (var i = 0; i < templates.length; i++) {
        var template = templates[i];
        if (template.supportsObject(object))
            return template;
    }
    return defaultTemplate;
}

The interface of a template object looks as follows (again simplified).

var Template =
{
    className: "",

    supportsObject: function(object) { return false; },
    getContextMenuItems: function(object) { return []; },
    getTooltip: function(object) { return null; },
    highlightObject: function(object, context) {},
    inspectObject: function(object, context) {},
};

  • className Every template should have a class name so CSS styles can be associated with it.
  • supportsObject Used to pick the right template for an object.
  • getContextMenuItems Used to get the commands that should be displayed in the context menu.
  • getTooltip Provides the text that is displayed in a tooltip.
  • highlightObject Can be used to highlight the object within the page when the mouse hovers over it.
  • inspectObject Can be used for further inspection of the object (e.g. selecting the right target panel when the user clicks on the object).
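To make the concept concrete, here is a minimal, self-contained sketch in plain JavaScript (the template names are made up; Firebug's real reps are much richer):

```javascript
// A tiny repository of templates ("reps").
var templates = [];
var defaultTemplate = {
  className: "default",
  supportsObject: function(object) { return true; },
  render: function(object) { return "<span>" + String(object) + "</span>"; }
};

// A template specialized for numbers.
var NumberTemplate = {
  className: "number",
  supportsObject: function(object) { return typeof object === "number"; },
  render: function(object) { return "<span class='number'>" + object + "</span>"; }
};
templates.push(NumberTemplate);

// The same lookup logic as the getTemplate function shown earlier.
function getTemplate(object) {
  for (var i = 0; i < templates.length; i++) {
    if (templates[i].supportsObject(object))
      return templates[i];
  }
  return defaultTemplate;
}

var markup = getTemplate(42).render(42);
// markup is "<span class='number'>42</span>"
```

Any panel that renders through getTemplate automatically shows the object the same way, which is the whole point of the unified rendering.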

See the real repository of templates (a template in Firebug is called a rep) on GitHub.

Extension Points

The entire concept is also nicely extensible. This is great especially for extension (i.e. Mozilla add-on) authors, who can plug into the logic and customize it.

  • Extensions can provide and register new templates that render specific object types (coming e.g. from JS libraries like jQuery or EmberJS) and define how those objects are rendered across the entire UI.
  • Extensions can also provide a set of actions that can be performed on existing or custom object types.
  • Extensions can specify new CSS for existing templates and create custom themes.
Firebug Internals I. – Data Providers and Viewers
http://www.softwareishard.com/blog/firebug/firebug-internals-i-data-providers-and-viewers/ (Fri, 28 Mar 2014)

One of the achievements of the Firebug 2 alpha 1 release has been the adoption of the new JSD2 API, a task that required significant changes and improvements in our code base. Among other things, we have also introduced a new concept that allows us to nicely build asynchronously updated UI.

There are other new concepts in Firebug 2, and this version is without doubt the best one we have released. Try it and let us know how it works for you (Firefox 30+ is needed).

In order to implement remote access to the server-side debugger API, the Firebug UI needs to know how to deal with asynchronous updates. We applied the Viewer Provider pattern and extended it with support for asynchronous data processing.

If you like using Document View, Model View Controller or similar design patterns to build your code base, you'll probably like Viewer Provider too.

So, follow this post if you are interested in what Viewer Provider looks like.

Viewer Provider

This design pattern represents a concept of data providers that mediate data access through a unified interface. Providers are usually consumed by Views (or Viewers), which use them to query for data and asynchronously populate their content when results are available.

First, let's see the simpler, but related, Document View pattern:

  • View is responsible for data rendering
  • Document represents a data source

The problem with this concept is that the View needs to know the interface (API) of the Document. This makes it hard for the View to switch to another data source; in other words, it's hard to reuse the same View for other Documents.

An improvement of this simple concept is incorporating a Provider between the Document and the View. The provider knows the Document API and exposes it in a unified way to the Viewer.

  • Provider provides data through unified interface

There is typically one provider for one specific data source/document, but in a complex application (like Firebug) there can even be a hierarchy of providers.

Having data providers implemented for various data sources means that existing viewers can easily consume any data and can be simply reused.

Here is what the Provider interface looks like:

var Provider =
{
  hasChildren: function(object) { return this.getChildren(object).length > 0; },
  getChildren: function(object) { return []; },
  getLabel: function(object, col) { return ""; },
  getValue: function(object, col) { return null; },
};
  • hasChildren: used mostly by tree-style viewers that need to know whether a twisty (+/- icon) should be displayed for a specific item. Its implementation can be simple, as in the code above, or optimized for various scenarios.
  • getChildren: returns a list of child objects for the given object.
  • getLabel: returns a label for the given object. The label is directly displayed within the UI (e.g. in a drop-down list). The col argument can be used by UI widgets supporting tabular data display (several labels for a given object/row).
  • getValue: returns a value for the given object.
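Here is a minimal, self-contained sketch of a provider for a plain object tree, together with a trivial viewer (the object shapes and names are made up for illustration):

```javascript
// A simple "document": a tree of plain objects.
var doc = {
  name: "root",
  children: [
    { name: "first", children: [] },
    { name: "second", children: [] },
  ],
};

// A provider implementing the interface above for that document.
var TreeProvider = {
  hasChildren: function(object) { return this.getChildren(object).length > 0; },
  getChildren: function(object) { return object.children || []; },
  getLabel: function(object, col) { return object.name; },
  getValue: function(object, col) { return object; },
};

// A trivial viewer: it only knows the provider interface, not the
// document's shape, so it can be reused for any data source.
function renderList(provider, root) {
  return provider.getChildren(root).map(function(child) {
    return provider.getLabel(child);
  });
}

var labels = renderList(TreeProvider, doc);
// labels is ["first", "second"]
```

Swapping in a provider for a different source (a DOM tree, a debugger scope chain) requires no change to the viewer at all.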

Asynchronous Viewer Provider

One of the challenges when consuming data is supporting asynchronous processing, especially in the case of web applications today. If you need data, you send an XHR and wait for the asynchronous response. The Viewer Provider pattern has a solution for this too.

  • getChildren returns a Promise instead of a direct list of children. The promise is resolved asynchronously as soon as the data is available.

The main difference is that getChildren returns a Promise. The solid line (in the image above) represents synchronous data querying; the dashed line represents asynchronous updates. The promise object usually comes from the data source and is passed through the provider to the view. Of course, the update happens when the queried data becomes available.
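An asynchronous variant of such a provider might look like the following sketch (the storage object and its simulated delay are made up; only the promise-based getChildren matters):

```javascript
// A "storage" (document) that returns data asynchronously,
// e.g. as it would over a network connection.
var Storage = {
  fetchChildren: function(object) {
    return new Promise(function(resolve) {
      // Simulate an asynchronous round trip.
      setTimeout(function() { resolve(object.children || []); }, 10);
    });
  },
};

// The async provider: getChildren now returns a Promise.
var AsyncProvider = {
  getChildren: function(object) { return Storage.fetchChildren(object); },
  getLabel: function(object, col) { return object.name; },
};

// The viewer populates its content when the promise resolves.
var root = { name: "root", children: [{ name: "child" }] };
var childrenPromise = AsyncProvider.getChildren(root);
childrenPromise.then(function(children) {
  children.forEach(function(child) {
    console.log(AsyncProvider.getLabel(child));
  });
});
```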

Online Demo

You can also check out a simple web application that shows how viewers and providers can be implemented.

The demo application implements the following objects:

  • Storage: a simple data storage (a document) returning data asynchronously
  • Provider: implemented for the Storage above
  • Viewer: a simple list viewer that uses the Provider above to access data
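As a rough sketch of how these three objects could fit together (the names and data below are invented, not taken from the demo source), the storage hands a Promise through the provider to the viewer:

```javascript
// Hypothetical async storage: yields its items after a (simulated) delay,
// the way an XHR-backed document would.
var Storage = {
  items: ["first", "second"],
  fetchItems: function() {
    var self = this;
    return new Promise(function(resolve) {
      setTimeout(function() { resolve(self.items); }, 0);
    });
  }
};

// Provider for the storage: getChildren returns a Promise,
// per the asynchronous variant of the pattern.
var StorageProvider = {
  getChildren: function(object) { return object.fetchItems(); },
  getLabel: function(object, col) { return String(object); }
};

// Viewer: queries synchronously, updates itself when the promise resolves.
var Viewer = {
  rendered: [],
  refresh: function(provider, object) {
    var self = this;
    return provider.getChildren(object).then(function(children) {
      self.rendered = children.map(function(child) {
        return provider.getLabel(child);
      });
      return self.rendered;
    });
  }
};
```

The viewer's code is identical whether the provider answers synchronously or not; only the moment of the UI update differs.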

The application's entry point is main.js

Read more about Data Providers and how they are implemented in Firebug 2.

]]>
http://www.softwareishard.com/blog/firebug/firebug-internals-i-data-providers-and-viewers/feed/ 2
Firebug 2: Support for dynamic scripts http://www.softwareishard.com/blog/firebug/firebug-2-support-for-dynamic-scripts/ http://www.softwareishard.com/blog/firebug/firebug-2-support-for-dynamic-scripts/#comments Thu, 27 Mar 2014 17:34:12 +0000 http://www.softwareishard.com/blog/?p=882 Firebug 2 (the first alpha) has been released this week, and it's time to check out some of the new features. Note that you need at least Firefox 30 to run it.

This brand new version introduces a lot of changes, the most important one probably being that it's based on the new Firefox debugging engine known as JSD2.

The Firebug UI has also been polished to match the Australis theme introduced in Firefox 29.


Dynamically Created Scripts

Let's see how debugging of dynamically created scripts has been improved in this release and how the Firebug UI deals with this common task. We'll cover the following kinds of dynamic scripts in this post:

  • Inline Event Handlers
  • Script Element Injection
  • Function Object

There are other ways to create scripts dynamically as well.

 

Inline Event Handlers

Inline event handlers are little pieces of JavaScript placed within HTML attributes designed to handle basic events like onclick.

<button onclick="testFunction()">Click Me</button>

These scripts are compiled dynamically on demand (before they are executed for the first time). That's why they are considered dynamic, and why you won't see them in the Script location list until they are compiled by the browser.

The script's URL is composed dynamically (there is no real URL for a dynamic script), and event handler scripts follow this scheme:

<element-name> <attribute-name> <button-label>

If you select the script from the script location menu, you should see the source that is placed within the onclick attribute.

Of course, you can create a breakpoint as usual. Try the live example page if you have Firebug 2 installed.

 

Script Element Injection

Another way to dynamically compile a piece of script is using <script> element injection.

var script =
        "var a = 10;\n" +
        "var b = 10;\n" +
        "console.log('a + b = %d', a + b);\n" +
        "//# sourceURL=injected-script.js\n";

var scriptTag = document.createElement("script");
scriptTag.textContent = script;
document.body.appendChild(scriptTag);

Again, you can check out the live example.

There are a couple of things to see:

  • There is one event handler script: button onclick Click Me, since we injected the script through a button and its event handler.
  • There is another dynamic script, injected-script.js - this one was created using the injected <script> element.
  • The injected script uses a custom URL, which is defined within the source using a sourceURL comment:
    //# sourceURL=injected-script.js

  • If no sourceURL is provided, a default one is generated (using the script element's id or XPath).

 

Function Object

Another way to compile a script dynamically is to use JavaScript's native Function object.

var source = "console.log('a + b = %d', a + b);\n";
var myFunc = new Function("a", "b", source);
myFunc.displayName = "myFunc";

myFunc(10, 10);

  • The script URL is generated automatically, using the following scheme: <parent-page-url> <line-number> "> Function";
  • You can't use sourceURL here due to a platform bug
  • There is one event handler script

Check out the live example.

 

We want to switch into the beta phase soon, and it would be great to hear how this version is working for you.

 

]]>
http://www.softwareishard.com/blog/firebug/firebug-2-support-for-dynamic-scripts/feed/ 4
Firebug Tip: Resend HTTP Request http://www.softwareishard.com/blog/planet-mozilla/firebug-tip-resend-http-request/ http://www.softwareishard.com/blog/planet-mozilla/firebug-tip-resend-http-request/#comments Fri, 06 Sep 2013 10:28:37 +0000 http://www.softwareishard.com/blog/?p=844 There are many cases when a web developer needs to resend an existing HTTP request (executed by the currently debugged page) and test the server back-end or perhaps even a specific web service.

Such an action is often repeated, and so the task should be simple and quick.

Firebug offers several ways to resend an HTTP request; read on if you are interested...

Resend Action

The first and most obvious way is to use the Resend action available in the Net and Console panel context menus. It's the simplest method: just right-click an HTTP request in the Net panel or an XHR log in the Console panel and pick the Resend menu item.

You should see a new request displayed. Both requests will be identical, since Firebug preserves headers, posted data, etc.

You can use this test page to try it yourself.

The URL of the web service is:

http://www.softwareishard.com/firebug/tips/resend/hello.php

...and the implementation looks as follows:

<?php
if (isset($_POST["name"]))
  echo "Hello ".$_POST["name"]."!";
else
  echo "Hello!";
?>

Copy as cURL

You might prefer the OS command line and its cURL tool.

A simple example first: to get the response from our hello.php service you need to execute:

curl http://www.softwareishard.com/firebug/tips/resend/hello.php

To get the cURL command for an existing HTTP request, right-click the request in the Net or Console panel and pick the Copy as cURL action. Firebug will copy it to the clipboard with all the necessary arguments (preserving headers, etc.).

The result in our case is:

curl 'http://www.softwareishard.com/firebug/tips/resend/hello.php' -H 'Host: www.softwareishard.com' -H 'User-Agent: Mozilla/5.0 (Windows NT 6.0; rv:26.0) Gecko/20100101 Firefox/26.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' -H 'Accept-Encoding: gzip, deflate' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'Referer: http://www.softwareishard.com/firebug/tips/resend/resend.html' --data 'name=Bob'

Command Editor

The third option is to use the Firebug Command Editor and execute an XHR with pure JavaScript. In this case you can specify headers and other options just as you like.

Just open the Firebug UI and select the Console panel on the page you are debugging.
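For example, here is a sketch of what you might type into the Command Editor to resend the POST from the test page above (the encodeForm helper is our own, and the header value is illustrative; adjust both to match the original request):

```javascript
// Encode form fields the same way an x-www-form-urlencoded POST would.
function encodeForm(fields) {
  return Object.keys(fields).map(function(key) {
    return encodeURIComponent(key) + "=" + encodeURIComponent(fields[key]);
  }).join("&");
}

// Resend a POST with explicit headers and body (a sketch, not Firebug's
// own Resend implementation).
function resendRequest(url, body) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", url, true);
  xhr.setRequestHeader("Content-Type",
    "application/x-www-form-urlencoded; charset=UTF-8");
  xhr.onload = function() {
    console.log("Response: " + xhr.responseText);
  };
  xhr.send(body);
  return xhr;
}

// Only fire the request where XHR exists (i.e. in a browser page).
if (typeof XMLHttpRequest !== "undefined") {
  resendRequest("http://www.softwareishard.com/firebug/tips/resend/hello.php",
                encodeForm({ name: "Bob" }));
}
```

Unlike the Resend action, here every header and field is under your control, which is handy when you want to vary the request while testing the back-end.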

 
 

  • Would you be interested in having another way to resend a request?
  • Would it also be useful to have Copy as Wget? If yes, star this issue so we know how high a priority it is for you.

 

 
]]>
http://www.softwareishard.com/blog/planet-mozilla/firebug-tip-resend-http-request/feed/ 1
Firebug Tip: getEventListeners() command http://www.softwareishard.com/blog/planet-mozilla/firebug-tip-geteventlisteners-command/ http://www.softwareishard.com/blog/planet-mozilla/firebug-tip-geteventlisteners-command/#comments Mon, 02 Sep 2013 06:40:47 +0000 http://www.softwareishard.com/blog/?p=806 One of the new features introduced in Firebug 1.12 is a new Command Line command called:

getEventListeners()
 
The command returns all the event listeners registered for a specific target. The target can be either an element or another DOM object that accepts event listeners (e.g. window or an XMLHttpRequest).

Basic Scenario

Let's see what basic usage of the getEventListeners() command looks like. First, here is a test page that registers one click listener for a testElement.

<!DOCTYPE html>
<html>
<head>
<title>getEventListeners()</title>
</head>
<body>
<div id="testElement">Click Me!</div>
<script>
function myClickListener()
{
    console.log("click");
}
var testElement = document.getElementById("testElement");
testElement.addEventListener("click", myClickListener, false);
</script>
</body>
</html>

The expression we are going to execute on Firebug's Command Line looks as follows:

getEventListeners($("#testElement"))

It returns a descriptor object that is logged into the Console panel.

If you click the descriptor, you'll be navigated to the DOM panel, which allows further inspection.

As you can see, there is one click listener registered with the testElement element (the click field is an array containing all registered click listeners). Clicking the myClickListener function navigates you to the Script panel to see its source code and perhaps create a breakpoint for further debugging.
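Outside Firebug the command isn't available, but the descriptor's shape (a map from event type to an array of entries with a listener field) can be modeled in plain JavaScript by wrapping addEventListener. The registry below is our own sketch, not Firebug's implementation:

```javascript
// Track registrations per target so they can be looked up later.
var registry = new WeakMap();

function trackedAddEventListener(target, type, listener, options) {
  var byType = registry.get(target) || {};
  // Each type maps to an array of { listener } entries,
  // mirroring the descriptor shape described above.
  (byType[type] = byType[type] || []).push({ listener: listener });
  registry.set(target, byType);
  target.addEventListener(type, listener, options);
}

function getListeners(target) {
  return registry.get(target) || {};
}

// Works with any EventTarget (DOM elements in a page, or Node's EventTarget).
var target = new EventTarget();
function myClickListener() { console.log("click"); }
trackedAddEventListener(target, "click", myClickListener);
```

After the registration above, getListeners(target).click[0].listener is myClickListener, analogous to the expression shown in the next section.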

Using getEventListeners() in an expression

In some cases, we might want to reference the listener function directly in an expression:

getEventListeners($("#testElement")).click[0].listener

The expression directly returns the handler function, which is logged into the Console panel. If you click the return value, you'll be navigated straight to the Script panel.

You might also want to manually execute the listener function and, for example, break in the debugger if you created a breakpoint inside the method.

getEventListeners($("#testElement")).click[0].listener()

Event Listeners & Closures

Some JavaScript libraries that implement an API for event listener registration might register their own function and call the original listener through a closure. Let's see a simple example that demonstrates this technique.

observe(testElement, "click", function myClickHandler() {
    console.log("click");
});

function observe(element, eventType, handler) {
    function localHelper() {
        handler();
    }
    return element.addEventListener(eventType, localHelper, false);
}

Executing the following expression on the Command Line returns the localHelper function, since it's the registered event handler.

getEventListeners($("#testElement")).click[0].listener

If you want to log the original listener function myClickHandler, you need to get the handler argument that is accessed by the localHelper closure. The next expression shows how a variable inside a closure can be accessed (via the .% syntax).

getEventListeners($("#testElement")).click[0].listener.%handler

This expression returns a reference to the myClickHandler function.

 

You can read more about the Closure Inspector on the Firebug wiki.
You can also read the wiki page about the getEventListeners command.

 
]]>
http://www.softwareishard.com/blog/planet-mozilla/firebug-tip-geteventlisteners-command/feed/ 5
How to Start with Firebug Lite http://www.softwareishard.com/blog/planet-mozilla/how-to-start-with-firebug-lite/ http://www.softwareishard.com/blog/planet-mozilla/how-to-start-with-firebug-lite/#comments Wed, 21 Aug 2013 18:30:53 +0000 http://www.softwareishard.com/blog/?p=800 Firebug Lite is a lightweight version of Firebug (the Firefox extension) that implements only a subset of its features (mainly missing the Script and Net panels).

It's implemented as a pure web application and runs in all major browsers.

Using Firebug Lite is quick since it doesn't have to be installed (it's a web app), and it can also be injected into an existing page using a bookmarklet.

The next set of screenshots shows what Firebug Lite looks like in various browsers.


Let's see how you can run Firebug Lite within a web page. This post covers four scenarios:

  • Include using <script> element
  • Run through Bookmarklet
  • Firebug Lite on iPad
  • Run as Chrome Extension

Include using <script> element

Firebug Lite is a pure JS application, so you can include it in your page just like any other JavaScript code. See an example:

<!DOCTYPE html>
<html>
<head>
  <title>Test</title>
  <script src="https://getfirebug.com/firebug-lite.js"
          type="text/javascript">
</script>
</head>
<body>
  <div style="color:green">Hello</div>
</body>
</html>

This approach is recommended when you often inspect the same page and want to have the Firebug Lite UI ready right after page load (refresh). You can also download the firebug-lite.js file and run it locally from your web server as follows:

<script type="text/javascript" src="/local/path/to/firebug-lite.js"></script>

Read more.

Run through Bookmarklet

You can also inject Firebug Lite into an existing page using the following bookmarklet.

javascript:(function(F,i,r,e,b,u,g,L,I,T,E){if(F.getElementById(b))return;E=F[i+'NS']&&F.documentElement.namespaceURI;E=E?F[i+'NS'](E,'script'):F[i]('script');E[r]('id',b);E[r]('src',I+g+T);E[r](b,u);(F[e]('head')[0]||F[e]('body')[0]).appendChild(E);E=new%20Image;E[r]('src',I+L);})(document,'createElement','setAttribute','getElementsByTagName','FirebugLite','4','firebug-lite.js','releases/lite/latest/skin/xp/sprite.png','https://getfirebug.com/','#startOpened');

Just drag this Firebug Lite Link into your Bookmarks toolbar. You can also click the link immediately to test Firebug Lite on this page.

This approach is recommended when you use Firebug Lite for inspecting random pages.

Firebug Lite on iPad

One of the most interesting use cases is running Firebug Lite on mobile devices, especially tablets, since they have bigger screens (Firebug Lite is not yet optimized for small screens).

Inspecting pages on mobile devices can be faster with Firebug Lite since you don't have to deal with remote debugging settings, set up a connection with a PC, etc. All you need to do is click a bookmarklet.

Read a post about how to create Firebug Lite Bookmarklet for iPad.

Run as Chrome Extension

Finally, you can install Firebug Lite as an extension in the Google Chrome browser.

There are several benefits over the Firebug Lite bookmarklet:

  • Browser toolbar integration
  • Able to activate Firebug Lite for a particular domain
  • Firebug Lite will be loaded before all other scripts, allowing it to capture all console calls, and all XHR requests for that page
  • It is faster to load, because all code and images are stored in the extension's directory on your machine
  • It will be able to read external resources in the next version

Read more about Firebug Lite on Chrome.

Resources

If you are interested in contributing to the project, you can start by reading Gal's post explaining what you can do. Gal Steinitz is the current Firebug Lite maintainer, so shoot any questions about what's coming up in his direction!

You might also be interested in what the future holds for Firebug Lite.

 
]]>
http://www.softwareishard.com/blog/planet-mozilla/how-to-start-with-firebug-lite/feed/ 3