Composer Monorepo Plugin (previously called Fiddler)

I have written about monorepos on this blog before, presented a talk about the topic and released a standalone tool called “Fiddler” that helps integrate Composer with a monolithic repository.

At the beginning of the year, somebody in the #composer-dev IRC channel on Freenode pointed me in the direction of Composer plugins, and turning Fiddler into one turned out to be an easy change.

With the help of a new Composer v1.1 feature that lets plugins add custom commands, Fiddler is now “gone”: I renamed the repository to the more practical beberlei/composer-monorepo-plugin package name on GitHub. After you install this plugin, you can maintain subpackages and their dependencies in a single repository.

$ composer require beberlei/composer-monorepo-plugin

To use the plugin, add a monorepo.json file to the directory of each subpackage, using a format similar to composer.json to declare dependencies on a) external Composer packages that you have listed in your global Composer file and b) other subpackages in the current monorepo. See this example for a demonstration:

{
    "deps": [
        "vendor/symfony/http-foundation",
        "components/Foo"
    ],
    "autoload": {
        "psr-0": {"Bar": "src/"}
    }
}

This subpackage, defined in a hypothetical file components/Bar/monorepo.json, depends on Symfony HTTP Foundation and on another subpackage Foo with its own components/Foo/monorepo.json. Notice how we don’t need to specify versions (they are implicit) and how other dependencies are imported using their path relative to the global composer.json.

The monorepo plugin is integrated with Composer, so every time you perform the install, update or dump-autoload commands, the subpackages are updated as well, and each gets its own autoloader that can be included from vendor/autoload.php relative to the subpackage's root directory, as usual.

How I use Wordpress with Git and Composer

I maintain two Wordpress blogs for my wife and wanted a workflow to develop, update, version-control and maintain them with Git and Composer, like I am used to with everything else I work on.

The resulting process is a combination of several blog posts and my own additions, worthy of writing about for the next person interested in this topic.

It turns out this is quite simple if you re-arrange the Wordpress directory layout a little bit and use some fantastic open-source projects to combine Wordpress and Composer.

Initialize Repository

As a first step, create a new directory and git repository for your blog:

$ mkdir myblog
$ cd myblog
$ git init

Create a docroot directory that is publicly available for the webserver:

$ mkdir htdocs

Place the index.php file in it that delegates to Wordpress (installed later):

// htdocs/index.php
// Front to the WordPress application. This file doesn't do anything, but loads
// wp-blog-header.php which does and tells WordPress to load the theme.

define('WP_USE_THEMES', true);
require( dirname( __FILE__ ) . '/wordpress/wp-blog-header.php' );

Create the wp-content directory inside the docroot; it will be configured to live outside the Wordpress installation.

$ mkdir -p htdocs/wp-content

And then create a .gitignore file with the following ignore paths:

htdocs/wordpress/
htdocs/wp-content/plugins/
htdocs/wp-content/themes/
htdocs/wp-content/uploads/
vendor/

If you want to add a custom theme or plugin you need to use git add -f to force the ignored path into Git.
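To see why the -f flag is needed, here is a small throwaway demonstration (the repository and theme names are made up for this sketch):

```shell
# set up a throwaway repository that ignores the themes directory
mkdir -p blogdemo/htdocs/wp-content/themes/my-theme
git init -q blogdemo
printf 'htdocs/wp-content/themes/\n' > blogdemo/.gitignore
echo '/* Theme stylesheet */' > blogdemo/htdocs/wp-content/themes/my-theme/style.css

# a plain "git add ." would skip the ignored path; -f forces it into the index
git -C blogdemo add -f htdocs/wp-content/themes/my-theme
git -C blogdemo status --porcelain
```

The status output shows the forced file staged (`A  htdocs/wp-content/themes/my-theme/style.css`) while the ignore rule stays in effect for everything else under themes/.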

Don’t forget to include the uploads directory in your backup when deploying this blog to production.

Your directory tree should now look like this:

├── .git
├── .gitignore
└── htdocs
    ├── index.php
    └── wp-content

In the next step we will use Composer to install Wordpress and plugins.

Setup Composer

Several people have done amazing work to make Wordpress and all the plugins and themes available through Composer. To utilize this work, we create a composer.json file inside our repository root. There the file is outside of the webserver's reach, so visitors of your blog cannot download the composer.json.

{
    "require": {
        "ext-gd": "*",
        "wpackagist-plugin/easy-media-gallery": "1.3.*",
        "johnpbloch/wordpress-core-installer": "^0.2.1",
        "johnpbloch/wordpress": "^4.4"
    },
    "extra": {
        "installer-paths": {
            "htdocs/wp-content/plugins/{$name}/": ["type:wordpress-plugin"],
            "htdocs/wp-content/themes/{$name}/": ["type:wordpress-theme"]
        },
        "wordpress-install-dir": "htdocs/wordpress"
    },
    "repositories": [
        {
            "type": "composer",
            "url": ""
        }
    ]
}

This composer.json uses the excellent Wordpress Core Installer by John P. Bloch and the WPackagist project by Outlandish.

The extra section configures Composer to place Wordpress core and all plugins in the correct directories: as you can see, we put core into htdocs/wordpress and plugins into htdocs/wp-content/plugins.
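With the composer.json above, the relevant part of the tree after installation looks roughly like this (showing only the example plugin):

```
htdocs/
├── index.php
├── wordpress/           # core, placed via wordpress-install-dir
└── wp-content/
    └── plugins/
        └── easy-media-gallery/
```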

Now run the Composer install command and you will see installation output similar to the next excerpt:

$ composer install
Loading composer repositories with package information
Installing dependencies (including require-dev)
  - Installing composer/installers (v1.0.23)
    Loading from cache

  - Installing johnpbloch/wordpress-core-installer (0.2.1)
    Loading from cache

  - Installing wpackagist-plugin/easy-media-gallery (1.3.93)
    Loading from cache

  - Installing johnpbloch/wordpress (4.4.2)
    Loading from cache

Writing lock file
Generating autoload files

The next step is to get Wordpress running using the Setup Wizard.

Setup Wordpress

Follow the Wordpress documentation to set up your Wordpress blog now; it will create the necessary database tables and give you a wp-config.php file to download. Copy this file to htdocs/wp-config.php and modify it slightly: it is necessary to adjust the WP_CONTENT_DIR, WP_CONTENT_URL and ABSPATH constants:


// generated contents of wp-config.php, salts, database and so on

define('WP_CONTENT_DIR',    __DIR__ . '/wp-content');
define('WP_CONTENT_URL',    WP_HOME . '/wp-content');

/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') ) {
    define('ABSPATH', dirname(__FILE__) . '/wordpress/');
}

/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');

Voila. You now have Wordpress running from a Git repository and maintain Wordpress core and plugins through Composer.

Different Development and Production Environments

The next step is introducing different environments, allowing the same codebase to be used in production and development, where the base URLs differ, without having to change wp-config.php or the database.

By default Wordpress relies on the SITEURL and HOME configuration variables from the wp_options database table; this means it's not easily possible to use the blog under both http://myblog.local (development) and your production domain.

But when working on the blog I want to copy the database from production and get it running on my local development machine with nothing more than exporting and importing a MySQL dump.

Luckily there is an easy workaround that allows this: You can overwrite the SITEURL and HOME variables using constants in wp-config.php.

For development I rely on the built-in PHP webserver that has been available since PHP 5.4, with a custom router script (I found this on a blog a long time ago, but cannot find the source anymore):


<?php
// htdocs/router.php
$root = $_SERVER['DOCUMENT_ROOT'];
$path = '/'.ltrim(parse_url($_SERVER['REQUEST_URI'])['path'],'/');

if (file_exists($root.$path)) {
    if (is_dir($root.$path) && substr($path, strlen($path) - 1, 1) !== '/') {
        $path = rtrim($path,'/').'/index.php';
    }

    if (strpos($path, '.php') === false) {
        return false;
    } else {
        require_once $root.$path;
    }
} else {
    include_once 'index.php';
}

To make your blog run flawlessly on your dev machine, open up htdocs/wp-config.php and add the following if statement to rewrite SITEURL and HOME config variables:

// htdocs/wp-config.php

// ... salts, DB user, password etc.

if (php_sapi_name() === 'cli-server' || php_sapi_name() === 'srv') {
    define('WP_ENV',        'development');
    define('WP_SITEURL',    'http://localhost:8000/wordpress');
    define('WP_HOME',       'http://localhost:8000');
} else {
    define('WP_ENV',        'production');
    define('WP_SITEURL',    'http://' . $_SERVER['SERVER_NAME'] . '/wordpress');
    define('WP_HOME',       'http://' . $_SERVER['SERVER_NAME']);
}

define('WP_DEBUG', WP_ENV === 'development');

You can now run your Wordpress blog locally using the following command-line arguments:

$ php -S localhost:8000 -t htdocs/ htdocs/router.php

Keep this command running and visit localhost:8000.

Monolithic Repositories with Composer and Relative Autoloading

I was just reminded on Twitter by Samuel that there is a way to run monolithic PHP repositories with multiple components that I haven’t mentioned in my previous post.

It relies on a separate composer.json for each component and uses the autoloading capabilities of Composer in a slightly hackish way.

Assume we have two components located in components/foo and components/bar, then if bar depends on foo, it could define its components/bar/composer.json file as:

{
    "autoload": {
        "psr-0": {
            "Foo": "../foo/src/"
        }
    }
}

This approach is very simple to start with, however it has some downsides you must take into account:

  • you have to redefine dependencies in every composer.json that relies on another component.
  • if foo and bar depend on different versions of some third library baz that are not compatible, then composer will not realize this and your code will break at runtime.
  • if you want to generate deployable units (tarballs, debs, ..) then you will have a hard time to collect all the implicit dependencies by traversing the autoloader for relative definitions.
  • A full checkout has multiple vendor directories with a lot of duplicated code.

I think this approach is OK if you are only sharing a small number of components that don’t define their own dependencies. The Fiddler approach however solves all these problems by forcing you to rely on the same dependencies globally, defined only once per project.

The ContainerTest

This is a short post before the weekend about testing in applications with a dependency injection container (DIC). This solution helps me with a problem that I occasionally trip over in environments with large numbers of services connected through a DIC.

The problem is forgetting to adjust the DIC configuration when you add or remove a dependency of a service. This can easily slip through into production if you rely on your functional and unit tests to catch the problem.

I can avoid this problem by adding a functional test to my application that instantiates all the various services and checks that they are created correctly. The first time I saw this pattern was during development of some of the early Symfony2 bundles, most notably DoctrineBundle.


<?php

namespace Acme;

class ContainerTest extends \PHPUnit_Framework_TestCase
{
    use SymfonySetup;

    public static function dataServices()
    {
        return array(
            array('AcmeDemoBundle.FooService', 'Acme\DemoBundle\Service\FooService'),
            array('AcmeDemoBundle.BarController', 'Acme\DemoBundle\Controller\BarController'),
        );
    }

    /**
     * @test
     * @dataProvider dataServices
     */
    public function it_creates_service($id, $class)
    {
        $service = $this->getContainer()->get($id);
        $this->assertInstanceOf($class, $service);
    }
}

Whenever you create or modify a service, check the ContainerTest to see if it is already guarded by a test. Add a test if necessary and then make the change. It’s as easy as that.

The SymfonySetup trait provides access to the Symfony DIC using getContainer() as you can see in the test method. See my blog post on traits in tests for more information.

Monolithic Repositories with PHP and Composer

tl;dr Monolithic repositories can bring a lot of benefits. I prototyped Fiddler that complements Composer to add dependency management for monolithic repositories to PHP.

Thanks to Alexander for discussing this topic with me as well as reviewing the draft of this post.

As Git and Composer become more ubiquitous in open-source projects and within companies, monolithic repositories containing multiple projects have come to be seen as a bit of a bad practice. This mirrors how monolithic applications are out of fashion, with the recent focus on microservices and Docker.

Composer has made it possible to create many small packages and distribute them easily through Packagist. This has massively improved the PHP ecosystem by increasing re-usability and sharing.

But it is important to consider package distribution and development separately from each other. The current progress in package-manager tooling comes at a cost for version-control productivity, because Composer, NPM and Bower force you to have exactly one repository per package to benefit from the reusability/distribution.

This blog post compares monolithic repositories with one repository per package approach. It focuses on internal projects and repositories in organizations and companies. I will discuss open source projects in a follow-up post.

Workflow at Facebook, Google, Twitter

The move towards smaller repositories is called into question by three extremely productive organizations that work at incredible scale.

  • Facebook mentioned in their talk “Big Code: Developer Infrastructure at Facebook’s Scale” that they are going to merge their three big code repositories Server, iOS and Android into a single big repository over the course of 2015.
  • Google open-sourced Bazel, the build tool behind a huge chunk of their codebase managed in a single Perforce repository with over 20 million commits (Reference).
  • Twitter, Foursquare and Square are working on their clone of Google’s Bazel build system called Pants. It is also designed for monolithic repositories.

All three companies cite huge developer productivity benefits, code-reusability, large-scale refactorings and development at scale for choosing this approach. The Facebook talk even mentions how all their development infrastructure efforts focus on keeping this workflow because of the benefits it brings.

Downsides of having many Repositories

In contrast, working with ever smaller repositories can be a huge burden on developers' mental models. I have seen this in open-source projects such as Doctrine and in several customer projects:

  1. Cross-repository changes require certain pull requests on Github/Gitlab to be merged in order or in combination, yet the tools don’t provide visibility into these dependencies. They are purely informal, leading to high error rates.
  2. Version pinning through the NPM and Composer package managers is great for managing third-party dependencies, as long as there are not too many of them and they don’t change too often. For internal dependencies it's a lot of work to constantly update dependencies between repositories. Time gets lost by developers who don’t have the correct dependencies, or through mistakes in the merge process.
  3. Changing code in core libraries can break dependencies without the developer even realizing this because tests don’t run together. This introduces a longer feedback cycle between code that depends on each other, with all the downsides.

One important remark about monolithic repositories: they do not automatically lead to a monolithic codebase. Symfony2 and ZF2 especially are very good examples of how you can build individual components with a clean dependency graph in a single big repository.

At Qafoo we have always preferred monolithic project repositories containing several components over many small independent ones. We advised many customers to choose this approach except in some special cases where going small was economically more efficient.

Benefits of Monolithic Repositories

Even if you are not at the scale of Facebook or Google, a single repository still provides the mentioned benefits:

  • Adjusting to constant change by factoring out libraries, merging libraries and introducing new dependencies for multiple projects is much easier when done in a single, atomic VCS commit.
  • Discoverability of code is much higher if you have all the code in a single place. Github and Gitlab don’t offer powerful tools like find, grep or sed across more than one repository. Hunting down dependencies in specific versions can cost a lot of time.
  • Reusability increases, as it is much easier to just use code from the same repository than from another repository. Composer and NPM simplify combining repositories at specific versions; the problem is knowing that the code exists in the first place.
  • From an operational perspective it is much easier to get a new developer up to speed setting up projects from a single repository. Practically, it's easier to add their public key to only one team/repository/directory than to hundreds. On top of that, setting up many small repositories and familiarizing yourself with each of them costs a lot of time.

This is why I have been struggling with how Packagist and Satis force the move to smaller repositories through the technical constraint “one repository equals one composer.json file”. For reusable open source projects this is perfectly fine, but for company projects I have seen it hurt developer productivity more often than is acceptable.

Introducing Fiddler

So today I prototyped a build system that complements Composer to manage multiple separate projects/packages in a single repository. I call it Fiddler. Fiddler introduces a maintainable approach to managing dependencies for multiple projects in a single repository, without losing the benefits of having explicit dependencies for each separate project.

In practice Fiddler allows you to manage all your third-party dependencies using a composer.json file, while adding a new way of managing your internal dependencies. It combines both external and internal packages to a single pool and allows you to pick them as dependencies for your projects.

For each project you add a fiddler.json file where you specify both your third-party and internal dependencies. Fiddler will take care of generating a specific autoloader for each project, containing only the dependencies of the project. This allows you to have one repository, while still having explicit dependencies per project.

Keeping explicit dependencies for each project means it’s still easy to find out which components are affected by changes in internal or third-party dependencies.

Example Project

Say you have three packages in your application, Library_1, Project_A and Project_B and both projects depend on the library which in turn depends on symfony/dependency-injection. The repository has the following file structure:

├── components
│   ├── Project_A
│   │   └── fiddler.json
│   ├── Project_B
│   │   └── fiddler.json
│   └── Library_1
│       └── fiddler.json
└── composer.json

The fiddler.json of Library_1 looks like this:

{
    "autoload": {"psr-0": {"Library1\\": "src/"}},
    "deps": ["vendor/symfony/dependency-injection"]
}

The fiddler.json files of Project_A and Project_B look similar (except for the autoload):

{
    "autoload": {"psr-0": {"ProjectA\\": "src/"}},
    "deps": ["components/Library_1"]
}

The global composer.json is what you would expect:

{
    "require": {
        "symfony/dependency-injection": "~2.6"
    }
}

As you can see, dependencies are specified without version constraints and as directory paths relative to the project root. Since everything is in one repository, all internal code is always versioned, tested and deployed together, dropping the need for explicit versions when specifying internal dependencies.

With this setup you can now generate the autoloading files for each package exactly like Composer would by calling:

$ php fiddler.phar build
Building fiddler.json projects.
 [Build] components/Library_1
 [Build] components/Project_A
 [Build] components/Project_B

Now in each package you can require "vendor/autoload.php" and it loads an autoloader with exactly the dependencies specified for that component, for example in components/Library_1/index.php:


<?php

require_once "vendor/autoload.php";

$container = new Symfony\Component\DependencyInjection\ContainerBuilder;

This is an early access preview; please test it and provide feedback on whether you see this as valuable or not, and about possible extensions. See the README for more details about functionality and implementation.

The code is very rough and simple right now, and you will probably stumble across some bugs; please report them. It is stable enough that we could already port Tideways, a multi-package repository, to it.

Integrate Symfony and Webpack

Asset management in Symfony2 is handled by default with the PHP-based library Assetic; however, I have never really connected with this library, and at least for me it usually wastes more time than it saves.

I am also not a big fan of the Node.JS based stack, because it tends to fail a lot for me as well. With teams that primarily consist of PHP developers and web designers, the transition to Node.JS tools should be very conservative in my opinion. No team member should feel overburdened by this new technology stack.

Frontend development is really not my strong suit, so these first steps I document here may seem obvious to some readers.

While researching React.JS I came across a tool called Webpack, which you could compare to Symfony’s Assetic. It primarily focuses on bundling Javascript modules, but you can also ship CSS assets with it.

The real benefits for Webpack however are:

  1. the builtin support for AMD or CommonJS style module loaders
  2. a builtin development web-server that runs on a dedicated port, serving your combined assets.
  3. a hot reloading plugin that automatically refreshes either the full page or just selected code when the assets change.
  4. module loaders that allow instant translation of JSX or other languages with Javascript transpilers (CoffeeScript, ...)

Let’s have a look at a simple example javascript application in app.js requiring jQuery. The code lives in the Symfony2 document root under web/:

web/
└── js
    ├── app.js
    └── vendor
        └── jquery.js
Then we can use AMD-style modules to resolve the dependencies in our code:

// web/js/app.js
define(['./vendor/jquery.js'], function($) {
    $(document).ready(function() {
        $("#content").html("Webpack Hello World!");
    });
});

You can compare this to PHP's require() and autoloading functionality, something that Javascript has historically been lacking and which usually leads to javascript files with many thousands of lines of code. You can also use CommonJS-style module loading if you prefer that approach.

The downside of adding this functionality is that your code always has to run through Webpack before it works in the browser. But Webpack solves this ingeniously by including a web server that does the translation for you in the background all the time, with a little help from a configuration file called webpack.config.js:

// webpack.config.js
module.exports = {
    entry: "./web/js/app.js",
    output: {
        filename: "bundle.js",
        path: 'web/assets/',
        publicPath: '/assets/'
    }
};

we can start our assets development server by calling:

$ webpack-dev-server --progress --colors --port 8090 --content-base=web/

This will start serving the combined javascript file at http://localhost:8090/assets/bundle.js as well as the asset page.css at http://localhost:8090/css/page.css by using the --content-base flag. Every change to any of the files that are part of the result will trigger a rebuild similar to the --watch flag of Assetic, Grunt or Gulp.

Webpack can be installed globally, so it is easy to get started with. I find it a huge benefit not having to introduce a package.json and Node+npm workflow into your PHP/Symfony project.

$ sudo npm install -g webpack

For integration into Symfony we make use of some Framework configuration to change the base path used for the {{ asset() }} twig-function:

# app/config/config.yml
framework:
    templating:
        assets_base_url: "%assets_base_url%"

# app/config/parameters.yml
parameters:
    assets_base_url: "http://localhost:8090"

This adds a base path in front of all your assets pointing to the Webpack dev server.

The only thing left for integration is to load the javascript file from your twig layout file:

        <div id="content"></div>

        {% if app.environment == "dev" %}
        <script src="{{ asset('webpack-dev-server.js') }}"></script>
        {% endif %}
        <script type="text/javascript" src="{{ asset('assets/bundle.js') }}"></script>

The webpack-dev-server.js file, loaded only in the development environment, handles hot module reloading: exchanging, adding, or removing modules while an application is running, without a full page reload whenever possible.

For production use the assets_base_url parameter has to be adjusted to your specific needs and you use the webpack command to generate a minified and optimized version of your javascript code.

$ webpack
Hash: 69657874504a1a1db7cf
Version: webpack 1.6.0
Time: 329ms
    Asset   Size  Chunks             Chunk Names
bundle.js  30533       0  [emitted]  main
   [2] ./web/js/app.js 1608 {0} [built]
   [5] ./web/js/vendor/jquery.js 496 {0} [built]

It will be placed inside web/assets/bundle.js as specified by the output section of the Webpack configuration. Getting started in production is as easy as setting the assets base URL to null and pushing the bundle.js to your production server.

I hope this example shows you some of the benefits of using Webpack over Assetic, Grunt or Gulp, and the simplicity of using it across development and production. While the example is Symfony2 related, the concepts apply to any kind of application.

Back to why I stumbled over Webpack in the first place: React.JS. I have been circling around React for a while with the impression that it is extremely well suited for frontend development. The problems I had with React were purely operational/workflow based:

  1. React encourages modular design of applications, something that you first have to get working, for example using require.js.
  2. Differentiation between development (refresh on modify) and production assets (minified).
  3. React uses a template language JSX that requires cross-compiling the *.jsx files they are written in into plain javascript files.

Now this blog post has already shown that Webpack solves points one and two, but it also solves the JSX Transformation with some extra configuration in webpack.config.js:

// webpack.config.js
module.exports = {
    entry: './web/js/app.jsx',
    output: {
        filename: 'bundle.js',
        path: 'web/assets/',
        publicPath: '/assets'
    },
    module: {
        loaders: [
            { test: /\.jsx$/, loader: 'jsx-loader?insertPragma=React.DOM&harmony' }
        ]
    },
    externals: {'react': 'React'},
    resolve: {extensions: ['', '.js', '.jsx']}
};

Now it is trivially easy to use React: just create a file with the *.jsx extension and Webpack will automatically run it through Facebook's JSX transformer before serving it as plain javascript. The only requirement is that you have installed the NPM package jsx-loader.

So far I have used Webpack only for two playground projects, but I am now confident enough to integrate it into some of my production projects.

Vagrant, NFS and NPM

I have ranted about Node.JS and NPM on Twitter before, as they have cost me lots of time, so I have to make up for this now and offer some solutions.

One problem I regularly have is the following: I have a Vagrant/Virtualbox setup using NFS and want to run NPM inside of it. Running it inside the box is necessary, because I don’t want everyone using the box to have to set up the Node stack.

However, running npm install on an NFS share doesn’t work, as per issue #3565, because a chmod fails, and apparently, judging from the ticket, this is not going to be fixed.

I finally got it working with a workaround script by Kevin Stone that mimics NPM but symlinks the package.json into a temporary directory, runs npm there, and then rsyncs the result back:

#!/bin/bash
# roles/nodejs/files/

# derive a unique temp dir from the current path
# (md5sum assumed here as the hash command)
ORIG_DIR=$PWD
HASH_CMD="md5sum"
DIR_NAME=`echo $PWD | $HASH_CMD | cut -f1 -d " "`
TMP_DIR="/tmp/$DIR_NAME"

mkdir -p $TMP_DIR

pushd $TMP_DIR

ln -sf $ORIG_DIR/package.json
npm $1

# Can't use archive mode cause of the permissions
rsync --recursive --links --times node_modules $ORIG_DIR

popd

Integrating this into my Ansible setup of the machine it looked like this:

# roles/nodejs/tasks/main.yml
# More tasks here before this...
- name: "Install npm workaround"
  copy: >
      src=tmpnpm.sh
      dest=/usr/local/bin/tmpnpm
      mode=0755

- name: "Install Global Dependencies"
  command: >
      /usr/local/bin/tmpnpm install -g {{ item }}
  with_items: global_packages

- name: "Install Package Dependencies"
  command: >
      /usr/local/bin/tmpnpm install
      chdir={{ item }}
  with_items: package_dirs

Where global_packages and package_dirs are specified from the outside when invoking the role:

# deploy.yml
- hosts: all
  roles:
    - name: nodejs
      global_packages:
        - grunt-cli
      package_dirs:
        - "/var/www/project"

This way the Ansible Node.JS role is reusable in different projects.

PHPUnit @before Annotations and Traits for Code Reuse

I have written about why I think traits should be avoided. There is one practical use-case that serves me well however: extending PHPUnit tests.

The PHPUnit TestCase is not very extendable except through inheritance. This often leads to a weird, deep inheritance hierarchy in test suites to achieve code reuse, for example the Doctrine ORM test suite, where OrmFunctionalTestCase extends OrmTestCase, which extends PHPUnit's test case.

Dependency injection is not easily possible in a PHPUnit test case; it could be solved using an additional listener and some configuration in phpunit.xml.

This leaves traits as a simple mechanism that doesn’t require writing an extension for PHPUnit and allows “multiple inheritance” to compose different features for our test cases.

See this simple example that is adding some more assertions:


<?php

trait MyAssertions
{
    public function assertIsNotANumber($value)
    {
        $this->assertTrue(is_nan($value));
    }
}

class MathTest extends \PHPUnit_Framework_TestCase
{
    use MyAssertions;

    public function testIsNotANumber()
    {
        $this->assertIsNotANumber(acos(8));
    }
}
When you have more complex requirements, you might need the trait to implement the setUp() method. This will prevent you from using multiple traits that all need to invoke setUp(). You could use trait conflict resolution, but then the renamed setup methods no longer get called automatically.
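To make the conflict concrete, here is a sketch (trait and class names made up) of the conflict-resolution syntax; PHPUnit only ever calls a method literally named setUp(), so the aliased copy is dead code unless you call it yourself:

```php
<?php

trait DatabaseSetup
{
    protected function setUp() { /* prepare database */ }
}

trait SymfonySetup
{
    protected function setUp() { /* boot kernel */ }
}

class ExampleTest extends \PHPUnit_Framework_TestCase
{
    use DatabaseSetup, SymfonySetup {
        SymfonySetup::setUp insteadof DatabaseSetup;
        DatabaseSetup::setUp as setUpDatabase;
    }

    // PHPUnit invokes setUp() (SymfonySetup's wins the conflict), but
    // setUpDatabase() is never called automatically; this is exactly
    // the problem the @before annotation solves.
}
```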

Fortunately PHPUnit 3.8+ comes to the rescue with new @before and @beforeClass annotations.

See this trait I use to make sure my database has the most current schema version, by invoking migrations in @beforeClass:


<?php

namespace Xhprof;

use Doctrine\DBAL\DriverManager;

trait DatabaseSetup
{
    /**
     * @var bool
     */
    private static $initialized = false;

    /**
     * @beforeClass
     */
    public static function initializeDatabase()
    {
        if (self::$initialized) {
            return;
        }

        self::$initialized = true;

        $conn = DriverManager::getConnection(array(
            'url' => $_SERVER['TEST_DATABASE_DSN']
        ));

        $dbDeploy = new DbDeploy($conn, realpath(__DIR__ . '/../../src/schema'));
        $dbDeploy->migrate(); // run pending migrations
    }
}

I can mix this with a second trait, SymfonySetup, that makes the DIC container available to my integration tests:


<?php

namespace Xhprof;

trait SymfonySetup
{
    protected $kernel;
    protected $container;

    /**
     * @before
     */
    protected function setupKernel()
    {
        $this->kernel = $this->createKernel();
        $this->kernel->boot();

        $this->container = $this->kernel->getContainer();
    }

    protected function createKernel(array $options = array())
    {
        return new \AppKernel('test', true);
    }

    /**
     * @after
     */
    protected function tearDownSymfonyKernel()
    {
        if (null !== $this->kernel) {
            $this->kernel->shutdown();
        }
    }
}
The Symfony setup trait uses @before and @after to setup and cleanup without clashing with the traditional PHPUnit setUp method.

Combining all this we could write a testcase like this:


<?php

class UserRepositoryTest extends \PHPUnit_Framework_TestCase
{
    use DatabaseSetup;
    use SymfonySetup;

    public function setUp()
    {
        // do setup here
    }

    public function testNotFindUserReturnsNull()
    {
        $userRepository = $this->container->get('user_repository');
        $unusedId = 9999;
        $user = $userRepository->find($unusedId);
        $this->assertNull($user);
    }
}
Sadly the @before calls are invoked after the original setUp() method, so we cannot access the Symfony container in setUp() yet. Maybe it would be more practical to have it work the other way around. I have opened an issue on PHPUnit for that.

A case for weak type hints only in PHP7

TL;DR: I was one voice for having strict type hints until I tried the current patch. From both a library and an application developer POV they don’t bring much to the table. I think PHP would be more consistent with weak type hints only.

These last weeks there have been tons of discussions about scalar type hints in PHP following Andrea Faulds’ RFC that is currently in voting. Most of them were limited to the PHP internals mailing list, but since the voting started some days ago much has also been said on Twitter and blogs.

This post is my completely subjective opinion on the issue.

I would have preferred strict type hints; however, after trying the patch, I think that strict type hints

  • will cause considerable problems for application developers, forcing them to “replicate weak type hinting” by manually casting everywhere.
  • are useless for library developers, because they have to assume the user is in weak type mode.
  • are useless within a library because I already know the types at the public API, weak mode would suffice for all the lower layers of my library.

Neither group of developers gets a considerable benefit from the current RFC’s strict mode.

The reason is simple: request and console inputs as well as many databases provide us with strings, so casting has to happen somewhere. Having strict type hints would not save us from this; type juggling and casting has to happen anyway, and PHP’s current approach to it is one of the main benefits of the language.

Real World Weak vs Strict Code Example

Let’s look at an example of everyday framework code (Full Code) to support my case:


class UserController
{
    public function listAction(Request $request)
    {
        $status = $request->get('status'); // this is a string

        return [
            'users' => $this->service->fetchUsers($status),
            'total' => $this->service->fetchTotalCount($status),
        ];
    }
}

class UserService
{
    const STATUS_INACTIVE = 1;
    const STATUS_WAITING = 2;
    const STATUS_APPROVED = 3;

    private $connection;

    public function fetchUsers(int $status): array
    {
        $sql = 'SELECT, u.username FROM users u WHERE u.status = ? LIMIT 10';

        return $this->connection->fetchAll($sql, [$status]);
    }

    public function fetchTotalCount(int $status): int
    {
        $sql = 'SELECT count(*) FROM users u WHERE u.status = ?';

        return $this->connection->fetchColumn($sql, [$status]); // returns a string
    }
}

See how the code in UserService is guarded by scalar type hints to enforce having the right types inside the service:

  • $status is a flag to filter the result by and it is one of the integer constants, the type hint coerces an integer from the request string.
  • fetchTotalCount() returns an integer of total number of users matching the query, the type hint coerces an integer from the database string.

This code example only works in the weak type hinting mode described in the RFC.

Now let’s enable strict type hinting to see how the code fails:

  • Passing the string status from the request to UserService methods is rejected; we need to cast status to an integer.
  • Returning the integer from fetchTotalCount() fails because the database returns a string number; we need to cast to an integer.
Catchable fatal error: Argument 1 passed to UserService::fetchUsers() must
be of the type integer, string given, called in /tmp/hints.php on line 22
and defined in /tmp/hints.php on line 37

Catchable fatal error: Return value of UserService::fetchTotalCount() must
be of the type integer, string returned in /tmp/hints.php on line 48

The fix everybody would go for is casting to (int) manually:

public function listAction(Request $request)
{
    $status = (int) $request->get('status'); // manually cast the request string

    return [
        'users' => $this->service->fetchUsers($status),
        'total' => $this->service->fetchTotalCount($status),
    ];
}

public function fetchTotalCount(int $status): int
{
    $sql = 'SELECT count(*) FROM users u WHERE u.status = ?';

    return (int) $this->connection->fetchColumn($sql, [$status]);
}

It feels to me that enabling strict mode completely defeats the purpose, because now we are forced to convert manually, reimplementing weak type hinting in our own code.

More importantly: we write code with casts already, the scalar type hints patch is not necessary for that! Only a superficial level of additional safety is gained, one additional check of something we already know is true!

Strict mode is useless for library developers, because I always have to assume weak mode anyways.

EDIT: I argued before that you have to check for strings being cast to 0 when using weak type hints. That is not necessary: passing fetchTotalCount("foo") will throw a catchable fatal error in weak mode already!
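A quick sketch of what weak mode does with the hypothetical fetchTotalCount() (assuming PHP 7 as released, where the failure surfaces as a TypeError; the function body is a stand-in for the real query): numeric strings are coerced, junk strings are rejected.

```php
<?php

function fetchTotalCount(int $status): int
{
    return $status; // stand-in for the real database query
}

var_dump(fetchTotalCount("3")); // the numeric string "3" is coerced to int(3)

try {
    fetchTotalCount("foo"); // non-numeric strings are not coerced
} catch (\TypeError $e) {
    echo "rejected\n";
}
```

So even in weak mode the type hint already guards against garbage input; only lossless numeric coercion is performed for us.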

Do we need strict mode?

In a well designed application or library, the developer can already trust the types of their variables today, 95% of the time, without even having type hints, by using carefully designed abstractions (for example Symfony Forms and Doctrine ORM): no substantial win from having strict type hints.

In a badly designed application, the developer is uncertain about the types of variables. Using strict mode in this scenario, they need to start casting everywhere just to be sure. I cannot imagine the resulting code looking anything but bad. Strict mode would actually be counterproductive here.

I also see a danger here that writing “strict mode” code will become a best practice, and this might lead developers working on badly designed applications to write even crappier code just to follow best practices.

As a pro strict mode developer I could argue:

  • that libraries such as Doctrine ORM and Symfony Forms already abstract all the nitty-gritty casting from the request or the database today. But I don’t think that is valid: they are two of the most sophisticated PHP libraries out there, maybe used by 1-5% of the userbase. I don’t want to force this level of abstraction on all users; I can’t use this level myself all the time. Also, if libraries already abstract this for us, why do we need to duplicate the checks again if we can trust the variables’ types?
  • that I might have complex (mathematical) algorithms that benefit from strict type hinting. But that is not really true: once the variables have passed through the public API of my fully type-hinted library, I know the types and can rely on them on all lower levels. Weak or strict type hinting doesn’t make a difference anymore. Well designed libraries written in PHP5 already provide this kind of trust using carefully designed value objects and guard clauses.
  • that using strict types in my library reduces the likelihood of bugs, but that is not guaranteed. Users of my library can always decide not to use strict type hints, and that requires me as a library author to consider this use-case and prevent possible problems. Again, using strict mode doesn’t provide a benefit here.
  • to write parts of the code in strict and parts in weak mode. But how to decide this? Projects usually pick only one paradigm for good reason: E_STRICT compatible code yes or no, for example. Switching is arbitrary and dangerously inconsistent. As a team lead I would reject such a convention because it is impractical. Code that follows this paradigm in strict languages such as Java and C# has an awful lot of converting methods such as $connection->fetchColumnAsInteger(). I do not want to go down that road.

Would we benefit from only strict mode?

Supporters of strict mode only: Make sure to understand why this will never happen!

Say the current RFC gets rejected, would we benefit from a strict type hinting RFC? No, and the current RFC details the exact reasons why. Most notably, for BC reasons the built-in PHP APIs would not use the new strict type hinting.

This current RFC is the only chance to get any kind of strict hinting into PHP. Yet given the limited usefulness described before, we can agree that just having weak mode would be more consistent and therefore better for everyone.


As a PHP developer using frameworks and libraries that help me write type-safe code today, strict typing appeals to me. But put to the test in real code, it proves to be impractical in many cases, and not actually much more useful than weak type hinting in many others.

Weak types provide me with much of the type safety I need: in any given method, using only type-hinted parameters and return values, I am safe from type juggling. As a library developer I have to assume the caller uses weak mode all the time anyway.

Having strict type hints suggests that we can somehow get rid of type juggling all together. But that is not true, because we still have to work with user input and databases.

The current RFC only introduced the extra strict mode because developers had very negative reactions towards weak type hints. Strike me from this list; weak type hints are everything that PHP should have. I will go as far as saying that other strict-typers would probably agree when actually working with the patch.

I would prefer just having weak types for now; this is already a big change for the language and would prove to be valuable for everyone.

I fear strict mode will have no greater benefit than gamification of the language: the winner is the one with the highest percentage of strict mode code.

Running HHVM with a Webserver

I haven’t used HHVM yet because a use-case for the alternative PHP runtime didn’t come up. Today I was wondering if our Qafoo Profiler would run out of the box with HHVM using the built-in XHProf extension (answer: it does).

For this experiment I wanted to run my wife’s Wordpress blog on HHVM locally. It turns out this is not very easy with an existing LAMP stack, because mod-php5 and mod-fastcgi obviously compete for the execution of .php files.

Quick googling didn’t turn up a solution (there probably is one, hints in the comments are appreciated) and I didn’t want to install a Vagrant box just for this. So I decided to turn this into a Sunday side project. Requirements: a simple webserver that acts as a proxy in front of HHVM’s FastCGI. Think of it as the “builtin webserver” that HHVM is missing.

This turns out to be really simple with Go, a language I have used a lot for small projects in the last months.

The code is very simple plumbing: an HTTP server accepts client requests, translates them to FastCGI requests, sends them to HHVM, and then parses the FastCGI response to turn it into an HTTP response.

As a PHP developer I am amazed how easy Go makes it to write this kind of infrastructure tooling. I prefer PHP for everything web-related, but as I tried to explain in my talk at PHPBenelux last week, Go is a fantastic language for writing small, self-contained infrastructure components (or microservices if you want a buzzword).

Back to playing with HHVM: if you want to give your application a try with HHVM instead of Zend Engine PHP, it boils down to installing a prebuilt HHVM package and then using my hhvm-serve command:

$ go get
$ hhvm-serve --document-root /var/www
Listening on http://localhost:8080
Document root is /var/www
Press Ctrl-C to quit.

The server passes all the necessary environment variables to HHVM so that catch-all front-controller scripts such as Wordpress index.php or Symfony’s app.php should just work.

If you don’t have a running Go Compiler setup this few lines should help you out on Ubuntu:

$ sudo apt-get install golang
$ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
$ mkdir -p ~/go/{src,bin,pkg}

You should put the $GOPATH and $PATH changes into your bashrc to make this a permanent solution.

Starting to run HHVM, a Wordpress installation is a good first candidate to check, as I knew from the HHVM team’s blog posts that Wordpress works. Using a simple siege-based benchmark I was able to trigger the JIT compiler, and the Profiler charts showed a nice performance boost minute after minute as HHVM replaced dynamic PHP with optimized (assembler?) code.
