Whitewashing is the blog of Benjamin Eberlei. Benjamin works for Qafoo on the PHP Profiler Tideways and you can book him for consulting and training.


Monolithic Repositories with Composer and Relative Autoloading

I was just reminded on Twitter by Samuel that there is a way to manage monolithic PHP repositories with multiple components that I haven’t mentioned in my previous post.

It relies on a new composer.json for each component and uses the autoloading capabilities of Composer in a hackish way.

Assume we have two components located in components/foo and components/bar, then if bar depends on foo, it could define its components/bar/composer.json file as:

    "autoload": {
        "psr-0": {
            "Foo": "../foo/src/"

This approach is very simple to start with, however it has some downsides you must take into account:

  • you have to redefine dependencies in every composer.json that relies on another component.
  • if foo and bar depend on different versions of some third library baz that are not compatible, then Composer will not realize this and your code will break at runtime.
  • if you want to generate deployable units (tarballs, debs, ..) then you will have a hard time collecting all the implicit dependencies by traversing the autoloader for relative definitions.
  • A full checkout has multiple vendor directories with a lot of duplicated code.

I think this approach is ok if you are only sharing a small number of components that don’t define their own dependencies. The Fiddler approach however solves all these problems by forcing you to rely on the same dependencies in a project globally and only once.

The ContainerTest

This is a short post before the weekend about testing in applications with a dependency injection container (DIC). This solution helps me with a problem that I occasionally trip over in environments with large numbers of services connected through a DIC.

The problem is forgetting to adjust the DIC configuration when you add a new dependency to a service or remove one. This can easily slip through into production if you rely on your functional and unit tests to catch the problem.

I can avoid this problem by adding a functional test to my application that instantiates all the various services and checks if they are created correctly. The first time I saw this pattern was during development of some of the early Symfony2 bundles, most notably DoctrineBundle.


namespace Acme;

class ContainerTest extends \PHPUnit_Framework_TestCase
{
    use SymfonySetup;

    public static function dataServices()
    {
        return array(
            array('AcmeDemoBundle.FooService', 'Acme\DemoBundle\Service\FooService'),
            array('AcmeDemoBundle.BarController', 'Acme\DemoBundle\Controller\BarController'),
        );
    }

    /**
     * @test
     * @dataProvider dataServices
     */
    public function it_creates_service($id, $class)
    {
        $service = $this->getContainer()->get($id);
        $this->assertInstanceOf($class, $service);
    }
}

Whenever you create or modify a service, check the ContainerTest to see if it is already guarded by a test. Add a test if necessary and then make the change. It’s as easy as that.

The SymfonySetup trait provides access to the Symfony DIC using getContainer() as you can see in the test method. See my blog post on traits in tests for more information.
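For context, here is a minimal sketch of what such a trait could look like (an assumption for illustration only; the real trait is shown in the traits blog post):

trait SymfonySetup
{
    private $container;

    protected function getContainer()
    {
        // boot the kernel lazily on first access
        if ($this->container === null) {
            $kernel = new \AppKernel('test', true);
            $kernel->boot();
            $this->container = $kernel->getContainer();
        }

        return $this->container;
    }
}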

Monolithic Repositories with PHP and Composer

tl;dr Monolithic repositories can bring a lot of benefits. I prototyped Fiddler that complements Composer to add dependency management for monolithic repositories to PHP.

Thanks to Alexander for discussing this topic with me as well as reviewing the draft of this post.

As Git and Composer have become more ubiquitous in open-source projects and within companies, monolithic repositories containing multiple projects have come to be seen as a bad practice. This is a similar trend to how monolithic applications are out of fashion, with the recent focus on microservices and Docker.

Composer has made it possible to create many small packages and distribute them easily through Packagist. This has massively improved the PHP ecosystem by increasing re-usability and sharing.

But it is important to consider package distribution and development separately from each other. The current progress in package manager tooling comes at a cost for version control productivity, because Composer, NPM and Bower force you to have exactly one repository for one package to benefit from the reusability/distribution.

This blog post compares monolithic repositories with the one-repository-per-package approach. It focuses on internal projects and repositories in organizations and companies. I will discuss open source projects in a follow-up post.

Workflow at Facebook, Google, Twitter

The move towards smaller repositories is called into question by three extremely productive organizations that work at incredible scale.

  • Facebook mentioned in their talk “Big Code: Developer Infrastructure at Facebook’s Scale” that they are going to merge their three big code repositories Server, iOS and Android into a single big repository over the course of 2015.
  • Google open-sourced Bazel, the build tool behind a huge chunk of their codebase managed in a single Perforce repository with over 20 million commits (Reference).
  • Twitter, Foursquare and Square are working on their clone of Google’s Bazel build system called Pants. It is also designed for monolithic repositories.

All three companies cite huge developer productivity benefits, code-reusability, large-scale refactorings and development at scale for choosing this approach. The Facebook talk even mentions how all their development infrastructure efforts focus on keeping this workflow because of the benefits it brings.

Downsides of having many Repositories

In contrast, working with ever smaller repositories can be a huge burden for developers’ mental models. I have seen this in open-source projects such as Doctrine as well as in several customer projects:

  1. Cross repository changes require certain pull-requests on Github/Gitlab to be merged in order or in combination yet the tools don’t provide visibility into these dependencies. They are purely informal, leading to high error rates.
  2. Version pinning through the NPM and Composer package managers is great for managing third-party dependencies, as long as there are not too many of them and they don’t change too often. For internal dependencies it’s a lot of work to update dependencies between repositories all the time. Time gets lost by developers who don’t have the correct dependencies or because of mistakes in the merge process.
  3. Changing code in core libraries can break dependencies without the developer even realizing this because tests don’t run together. This introduces a longer feedback cycle between code that depends on each other, with all the downsides.

One important remark about monolithic repositories: they do not automatically lead to a monolithic code-base. Especially Symfony2 and ZF2 are very good examples of how you can build individual components with a clean dependency graph in a single big repository.

At Qafoo we have always preferred monolithic project repositories containing several components over many small independent ones. We advised many customers to choose this approach, except in some special cases where going small was economically more efficient.

Benefits of Monolithic Repositories

Even if you are not at the scale of Facebook or Google, a single repository still provides the mentioned benefits:

  • Adjusting to constant change by factoring out libraries, merging libraries and introducing new dependencies for multiple projects is much easier when done in a single, atomic VCS commit.
  • Discoverability of code is much higher if you have all the code in a single place. Github and Gitlab don’t offer powerful tools like find, grep, sed over more than one repository. Hunting down dependencies, in specific versions, can cost a lot of time.
  • Reusability increases as it is much easier to just use code from the same repository than from another repository. Composer and NPM simplify combining repositories at specific versions, however one problem is actually knowing that the code exists in the first place.
  • From an operational perspective it is much easier to get a new developer up to speed setting up projects from a single repository. Practically, it is easier to add their public key to only one Team/Repository/Directory than to hundreds. On top of that, setting up many small repositories and familiarizing yourself with each of them costs a lot of time.

This is why I have been struggling with how Packagist and Satis force the move to smaller repositories through the technical constraint “one repository equals one composer.json file”. For reusable open source projects this is perfectly fine, but for company projects I have seen it hurt developer productivity more often than is acceptable.

Introducing Fiddler

So today I prototyped a build system that complements Composer to manage multiple separate projects/packages in a single repository. I call it Fiddler. Fiddler introduces a maintainable approach to managing dependencies for multiple projects in a single repository, without losing the benefits of having explicit dependencies for each separate project.

In practice Fiddler allows you to manage all your third-party dependencies using a composer.json file, while adding a new way of managing your internal dependencies. It combines both external and internal packages into a single pool and allows you to pick them as dependencies for your projects.

For each project you add a fiddler.json file where you specify both your third-party and internal dependencies. Fiddler will take care of generating a specific autoloader for each project, containing only the dependencies of the project. This allows you to have one repository, while still having explicit dependencies per project.

Keeping explicit dependencies for each project means it’s still easy to find out which components are affected by changes in internal or third-party dependencies.

Example Project

Say you have three packages in your application, Library_1, Project_A and Project_B and both projects depend on the library which in turn depends on symfony/dependency-injection. The repository has the following file structure:

├── components
│   ├── Project_A
│   │   └── fiddler.json
│   ├── Project_B
│   │   └── fiddler.json
│   └── Library_1
│       └── fiddler.json
├── composer.json

The fiddler.json of Library_1 looks like this:

    "autoload": {"psr-0": {"Library1\\": "src/"}},
    "deps": ["vendor/symfony/dependency-injection"]

The fiddler.json of Project_A and Project_B looks similar (except for the autoload):

    "autoload": {"psr-0": {"ProjectA\\": "src/"}},
    "deps": ["components/Library_1"]

The global composer.json is as you would expect:

    "require": {
        "symfony/dependency-injection": "~2.6"

As you can see, dependencies are specified without version constraints and as directory paths relative to the project root. Since everything is in one repository, all internal code is always versioned, tested and deployed together, dropping the need for explicit versions when specifying internal dependencies.

With this setup you can now generate the autoloading files for each package, exactly like Composer would, by calling:

$ php fiddler.phar build
Building fiddler.json projects.
 [Build] components/Library_1
 [Build] components/Project_A
 [Build] components/Project_B

Now in each package you can require "vendor/autoload.php"; and it loads an autoloader with exactly the dependencies specified for that component, for example in components/Library_1/index.php:


require_once "vendor/autoload.php";

$container = new Symfony\Component\DependencyInjection\ContainerBuilder;

This is an early access preview; please test it, provide feedback on whether you see it as valuable or not, and suggest possible extensions. See the README for more details about functionality and implementation details.

The code is very rough and simple right now and you will probably stumble across some bugs, please report them. It is stable enough that we could already port Tideways, which is a multi-package repository, to it.

Integrate Symfony and Webpack

Asset management in Symfony2 is handled with the PHP-based library Assetic by default, however I have never really connected with this library and at least for me it usually wastes more time than it saves.

I am also not a big fan of the Node.JS based stack, because it tends to fail a lot for me as well. With teams that primarily consist of PHP developers and web designers, the transition to Node.JS tools should be very conservative in my opinion. No team member should feel overburdened by this new technology stack.

Frontend development is really not my strong suit, so these first steps I document here may seem obvious to some readers.

While researching React.JS I came across a tool called Webpack, which you could compare to Symfony’s Assetic. It primarily focuses on bundling Javascript modules, but you can also ship CSS assets with it.

The real benefits for Webpack however are:

  1. the builtin support for AMD or CommonJS style module loaders
  2. a builtin development web-server that runs on a dedicated port, serving your combined assets.
  3. a hot reloading plugin that automatically refreshes either the full page or just selected code when the assets change.
  4. module loaders that allow instant translation of JSX or other languages with Javascript transpilers (CoffeeScript, ...)

Let’s have a look at a simple example javascript application in web/js/app.js requiring jQuery from web/js/vendor/jquery.js, both inside the Symfony2 document root web/. We can use AMD-style modules to resolve the dependencies in our code:

// app.js
define(['./vendor/jquery.js'], function($) {
    $(document).ready(function() {
        $("#content").html("Webpack Hello World!");
    });
});
You can compare this to PHP’s require() and autoloading functionality, something that Javascript has historically been lacking and which usually leads to javascript files with many thousands of lines of code. You can also use CommonJS-style module loading if you prefer that approach.
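As a rough PHP analogy (a sketch; the Greeter class is made up for illustration), Composer’s autoloader resolves class dependencies on demand much like an AMD loader resolves modules:

require_once __DIR__ . '/vendor/autoload.php';

use Acme\Greeter; // hypothetical class, located and loaded on first use

$greeter = new Greeter();
echo $greeter->hello('Webpack');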

The downside of adding this functionality is that your code always has to run through Webpack to work in the browser. But Webpack solves this ingeniously by including a web-server that does the translation for you in the background all the time, with a little help from a configuration file called webpack.config.js:

// webpack.config.js
module.exports = {
    entry: "./web/js/app.js",
    output: {
        filename: "bundle.js",
        path: "web/assets/",
        publicPath: "/assets/"
    }
};

we can start our assets development server by calling:

$ webpack-dev-server --progress --colors --port 8090 --content-base=web/

This will start serving the combined javascript file at http://localhost:8090/assets/bundle.js as well as the asset page.css at http://localhost:8090/css/page.css by using the --content-base flag. Every change to any of the files that are part of the result will trigger a rebuild similar to the --watch flag of Assetic, Grunt or Gulp.

Webpack can be installed globally, so it is easy to get started with. I find it a huge benefit not having to add a package.json and a Node+npm workflow to your PHP/Symfony project.

$ sudo npm install -g webpack

For integration into Symfony we make use of some framework configuration to change the base path used by the {{ asset() }} twig-function:

# app/config/config.yml (parent keys assumed)
framework:
    templating:
        assets_base_url: "%assets_base_url%"

# app/config/parameters.yml
parameters:
    assets_base_url: "http://localhost:8090"

This adds a base path in front of all your assets pointing to the Webpack dev server.

The only thing left for integration is to load the javascript file from your twig layout file:

        <div id="content"></div>

        {% if app.environment == "dev" %}
        <script src="{{ asset('webpack-dev-server.js') }}"></script>
        {% endif %}
        <script type="text/javascript" src="{{ asset('assets/bundle.js') }}"></script>

The webpack-dev-server.js file, loaded only in the development environment, handles hot module replacement: exchanging, adding, or removing modules while the application is running, without a full page reload whenever possible.

For production use, the assets_base_url parameter has to be adjusted to your specific needs, and you use the webpack command to generate a minified and optimized version of your javascript code.

$ webpack
Hash: 69657874504a1a1db7cf
Version: webpack 1.6.0
Time: 329ms
    Asset   Size  Chunks             Chunk Names
bundle.js  30533       0  [emitted]  main
   [2] ./web/js/app.js 1608 {0} [built]
   [5] ./web/js/vendor/jquery.js 496 {0} [built]

It will be placed inside web/assets/bundle.js as specified by the output section of the Webpack configuration. Getting started in production is as easy as setting the assets base url to null and pushing the bundle.js to your production server.

I hope this example shows you some of the benefits of using Webpack over Assetic, Grunt or Gulp and the simplicity using it between development and production. While the example is Symfony2 related, the concepts apply to any kind of application.

Back to why I stumbled over Webpack in the first place: React.JS. I have been circling around React for a while with the impression that it is extremely well-suited for frontend development. The problems I had with React were purely operations/workflow based:

  1. React encourages modular design of applications, something that you first have to get working, for example using require.js.
  2. Differentiation between development (refresh on modify) and production assets (minified).
  3. React uses a template language, JSX, that requires cross-compiling the *.jsx files components are written in into plain javascript files.

Now this blog post has already shown that Webpack solves points one and two, but it also solves the JSX Transformation with some extra configuration in webpack.config.js:

// webpack.config.js
module.exports = {
    entry: './web/js/app.jsx',
    output: {
        filename: 'bundle.js',
        path: 'web/assets/',
        publicPath: '/assets'
    },
    module: {
        loaders: [
            { test: /\.jsx$/, loader: 'jsx-loader?insertPragma=React.DOM&harmony' }
        ]
    },
    externals: {'react': 'React'},
    resolve: {extensions: ['', '.js', '.jsx']}
};

Now it is trivially easy to use React: just create a file with the *.jsx extension and Webpack will automatically run it through Facebook’s JSX transformer before serving it as plain javascript. The only requirement is that you have to install the NPM package jsx-loader.

So far I have used webpack only for two playground projects, but I am very confident integrating it into some of my production projects now.

Vagrant, NFS and NPM

I have ranted on Twitter before about Node.JS and NPM costing me lots of time, so I have to make up for it now and offer some solutions.

One problem I regularly have is the following: I have a Vagrant/VirtualBox setup using NFS and want to run NPM inside of it. Running it inside the box is necessary, because I don’t want everyone using the box to have to set up the Node stack.

However, running npm install on an NFS share doesn’t work, as per issue #3565, because a chmod fails, and judging from the ticket this is apparently not going to be fixed.

I finally got it working with a workaround script by Kevin Stone that mimics NPM, but moves the package.json to a temporary directory, runs NPM there and then rsyncs the result back:

#!/bin/bash
# roles/nodejs/files/tmpnpm.sh
# The variable definitions were missing from the snippet; these are assumed:
HASH_CMD="md5sum"
ORIG_DIR=$PWD

DIR_NAME=`echo $PWD | $HASH_CMD | cut -f1 -d " "`
TMP_DIR="/tmp/npm/$DIR_NAME"

mkdir -p $TMP_DIR

pushd $TMP_DIR

ln -sf $ORIG_DIR/package.json
npm $1

# Can't use archive mode cause of the permissions
rsync --recursive --links --times node_modules $ORIG_DIR

popd


Integrating this into my Ansible setup of the machine it looked like this:

# roles/nodejs/tasks/main.yml
# More tasks here before this...
- name: "Install npm workaround"
  copy: >

- name: "Install Global Dependencies"
  command: >
      /usr/local/bin/tmpnpm install -g {{ item }}
  with_items: global_packages

- name: "Install Package Dependencies"
  command: >
      /usr/local/bin/tmpnpm install
      chdir={{ item }}
  with_items: package_dirs

Where global_packages and package_dirs are specified from the outside when invoking the role:

# deploy.yml
- hosts: all
  roles:
    - name: nodejs
      global_packages:
        - grunt-cli
      package_dirs:
        - "/var/www/project"

This way the Ansible Node.JS role is reusable in different projects.

PHPUnit @before Annotations and traits for code-reuse

I have written about why I think traits should be avoided. There is a practical use-case that serves me well however: Extending PHPUnit tests.

The PHPUnit TestCase is not very extendable except through inheritance. This often leads to a weird, deep inheritance hierarchy in testsuites to achieve code reuse, for example the Doctrine ORM testsuite, where OrmFunctionalTestCase extends OrmTestCase, which extends PHPUnit’s TestCase.

Dependency injection is not easily possible in a PHPUnit testcase, but it could be solved using an additional listener and some configuration in phpunit.xml.
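To illustrate what such a listener could look like, here is a rough sketch (the listener class, the ContainerAwareTest interface and the registry are assumptions, not an existing extension), which would then be registered in the <listeners> section of phpunit.xml:

// A sketch only: inject dependencies into tests from a PHPUnit listener.
// ContainerAwareTest and ContainerRegistry are hypothetical names.
class DependencyInjectionListener extends \PHPUnit_Framework_BaseTestListener
{
    public function startTest(\PHPUnit_Framework_Test $test)
    {
        if ($test instanceof ContainerAwareTest) {
            $test->setContainer(ContainerRegistry::getContainer());
        }
    }
}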

This leaves traits as a simple mechanism that doesn’t require writing an extension for PHPUnit and allows “multiple inheritance” to compose different features for our test cases.

See this simple example that is adding some more assertions:


trait MyAssertions
    public function assertIsNotANumber($value)

class MathTest extends \PHPUnit_Framework_TestCase
    use MyAssertions;

    public function testIsNotANumber()

When you have more complex requirements, you might need the trait to implement the setUp() method. This will prevent you from using multiple traits that all need to invoke setUp(). You could use trait conflict resolution, but then the renamed setup methods would not get called automatically anymore.
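For illustration, a sketch of that conflict resolution, assuming two traits that both define setUp():

class MyTest extends \PHPUnit_Framework_TestCase
{
    use DatabaseSetup, SymfonySetup {
        DatabaseSetup::setUp insteadof SymfonySetup;
        SymfonySetup::setUp as setUpSymfony;
    }

    // PHPUnit only calls setUp() (DatabaseSetup's version) automatically,
    // the renamed setUpSymfony() would have to be invoked manually.
}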

Fortunately PHPUnit 3.8+ comes to the rescue with its new @before and @beforeClass annotations.

See this trait I use for making sure my database is using the most current database version by invoking migrations in @beforeClass:


namespace Xhprof;

use Doctrine\DBAL\DriverManager;

trait DatabaseSetup
{
    /**
     * @var bool
     */
    private static $initialized = false;

    /**
     * @beforeClass
     */
    public static function initializeDatabase()
    {
        if (self::$initialized) {
            return;
        }

        self::$initialized = true;

        $conn = DriverManager::getConnection(array(
            'url' => $_SERVER['TEST_DATABASE_DSN']
        ));

        $dbDeploy = new DbDeploy($conn, realpath(__DIR__ . '/../../src/schema'));
        $dbDeploy->migrate();
    }
}

I could mix this with a second trait SymfonySetup that makes the DIC container available for my integration tests:


namespace Xhprof;

trait SymfonySetup
{
    protected $kernel;
    protected $container;

    /**
     * @before
     */
    protected function setupKernel()
    {
        $this->kernel = $this->createKernel();
        $this->kernel->boot();

        $this->container = $this->kernel->getContainer();
    }

    protected function createKernel(array $options = array())
    {
        return new \AppKernel('test', true);
    }

    /**
     * @after
     */
    protected function tearDownSymfonyKernel()
    {
        if (null !== $this->kernel) {
            $this->kernel->shutdown();
        }
    }
}
The Symfony setup trait uses @before and @after to setup and cleanup without clashing with the traditional PHPUnit setUp method.

Combining all this we could write a testcase like this:


class UserRepositoryTest extends \PHPUnit_Framework_TestCase
{
    use DatabaseSetup;
    use SymfonySetup;

    public function setUp()
    {
        // do setup here
    }

    public function testNotFindUserReturnsNull()
    {
        $userRepository = $this->container->get('user_repository');
        $unusedId = 9999;
        $user = $userRepository->find($unusedId);
        $this->assertNull($user);
    }
}

Sadly the @before methods are invoked after the original setUp() method, so we cannot access the Symfony container in setUp() yet. Maybe it would be more practical to have it work the other way around. I have opened an issue on PHPUnit for that.

A case for weak type hints only in PHP7

TL;DR: I was one voice for having strict type hints until I tried the current patch. From both a library and an application developer POV they don’t bring much to the table. I think PHP would be more consistent with weak type hints only.

These last weeks there have been tons of discussions about scalar type hints in PHP, following Andrea Faulds’ RFC that is currently in voting. Most of them were limited to the PHP internals mailing list, but since the voting started some days ago much has also been said on Twitter and in blogs.

This post is my completely subjective opinion on the issue.

I would have preferred strict type hints, however after trying the patch, I think that strict type hints

  • will cause considerable problems for application developers, forcing them to “replicate weak type hinting” by manually casting everywhere.
  • are useless for library developers, because they have to assume the user is in weak type mode.
  • are useless within a library because I already know the types at the public API, weak mode would suffice for all the lower layers of my library.

Neither group of developers gets a considerable benefit from the current RFC’s strict mode.

The simple reason for this: request and console inputs as well as many databases provide us with strings, so casting has to happen somewhere. Having strict type hints would not save us from this; type juggling and casting has to happen, and PHP’s current approach is one of the main benefits of the language.

Real World Weak vs Strict Code Example

Let’s look at an example of everyday framework code (Full Code) to support my case:


class UserController
{
    public function listAction(Request $request)
    {
        $status = $request->get('status'); // this is a string

        return [
            'users' => $this->service->fetchUsers($status),
            'total' => $this->service->fetchTotalCount($status),
        ];
    }
}

class UserService
{
    const STATUS_INACTIVE = 1;
    const STATUS_WAITING = 2;
    const STATUS_APPROVED = 3;

    private $connection;

    public function fetchUsers(int $status): array
    {
        $sql = 'SELECT u.id, u.username FROM users u WHERE u.status = ? LIMIT 10';

        return $this->connection->fetchAll($sql, [$status]);
    }

    public function fetchTotalCount(int $status): int
    {
        $sql = 'SELECT count(*) FROM users u WHERE u.status = ?';

        return $this->connection->fetchColumn($sql, [$status]); // returns a string
    }
}

See how the code in UserService is guarded by scalar type hints to enforce having the right types inside the service:

  • $status is a flag to filter the result by and it is one of the integer constants, the type hint coerces an integer from the request string.
  • fetchTotalCount() returns an integer of total number of users matching the query, the type hint coerces an integer from the database string.

This code example only works with weak typehinting mode as described in the RFC.

Now lets enable strict type hinting to see how the code fails:

  • Passing the string status from the request to the UserService methods is rejected, we need to cast status to integer.
  • Returning the integer from fetchTotalCount fails because the database returns a string number. We need to cast to integer.
Catchable fatal error: Argument 1 passed to UserService::fetchUsers() must
be of the type integer, string given, called in /tmp/hints.php on line 22
and defined in /tmp/hints.php on line 37

Catchable fatal error: Return value of UserService::fetchTotalCount() must
be of the type integer, string returned in /tmp/hints.php on line 48

The fix everybody would go for is casting to (int) manually:

public function listAction(Request $request)
{
    $status = (int)$request->get('status'); // cast the request string

    return [
        'users' => $this->service->fetchUsers($status),
        'total' => $this->service->fetchTotalCount($status),
    ];
}

public function fetchTotalCount(int $status): int
{
    $sql = 'SELECT count(*) FROM users u WHERE u.status = ?';

    return (int)$this->connection->fetchColumn($sql, [$status]);
}

It feels to me that enabling strict mode completely defeats the purpose, because now we are forced to convert manually, reimplementing weak type hinting in our own code.

More importantly: we write code with casts already, the scalar type hints patch is not necessary for that! Only a superficial level of additional safety is gained, one additional check of something we already know is true!

Strict mode is useless for library developers, because I always have to assume weak mode anyways.

EDIT: I argued before that you have to check for strings casting to 0 when using weak type hints. That is not necessary: passing fetchTotalCount("foo") will throw a catchable fatal error in weak mode already!
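A quick sketch of that weak-mode behaviour (PHP 7 semantics):

function fetchTotalCount(int $status): int
{
    return $status;
}

fetchTotalCount('42');  // numeric string is coerced to int(42)
fetchTotalCount('foo'); // catchable fatal error, no silent cast to 0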

Do we need strict mode?

In a well designed application or library, the developer can already trust the types of their variables today, 95% of the time, without even having type hints, by using carefully designed abstractions (for example Symfony Forms and Doctrine ORM): no substantial win from having strict type hints.

In a badly designed application, the developer is uncertain about the types of variables. Using strict mode in this scenario they need to start casting everywhere just to be sure. I cannot imagine the resulting code looking anything but bad. Strict mode would actually be counterproductive here.

I also see a danger that writing “strict mode” code will become a best practice, and this might lead developers working on badly designed applications to write even crappier code just to follow best practices.

As a pro strict mode developer I could argue:

  • that libraries such as Doctrine ORM and Symfony Forms already abstract all the nitty-gritty casting from the request or database today. But I don’t think that is valid: They are two of the most sophisticated PHP libraries out there, maybe used by 1-5% of the userbase. I don’t want to force this level of abstraction on all users. I can’t use this level myself all the time. Also, if libraries already abstract this for us, why do we need to duplicate the checks again if we can trust the variables’ types?
  • that I might have complex (mathematical) algorithms that benefit from strict type hinting. But that is not really true: Once the variables have passed through the public API of my fully type-hinted library I know the types and can rely on them on all lower levels. Weak or strict type hinting doesn’t make a difference anymore. Well designed libraries written in PHP5 already provide this kind of trust using carefully designed value objects and guard clauses (see the sketch after this list).
  • that using strict type in my library reduce the likelihood of bugs, but that is not guaranteed. Users of my library can always decide not to use strict type hints and that requires me as a library author to consider this use-case and prevent possible problems. Again using strict mode doesn’t provide a benefit here.
  • to write parts of the code in strict and parts in weak mode. But how to decide this? Projects usually pick only one paradigm for good reason: E_STRICT compatible code yes or no, for example. Switching is arbitrary and dangerously inconsistent. As a team lead I would reject this kind of convention because it is impractical. Code that follows this paradigm in strict languages such as Java and C# has an awful lot of converting methods such as $connection->fetchColumnAsInteger(). I do not want to go down that road.
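To sketch the value object pattern mentioned above (the class name is invented for illustration), a guard clause gives the same trust as a strict scalar hint:

class Status
{
    private $value;

    public function __construct($value)
    {
        // guard clause in place of a strict scalar type hint
        if (!in_array($value, array(1, 2, 3), true)) {
            throw new \InvalidArgumentException('Invalid status: ' . $value);
        }

        $this->value = $value;
    }

    public function getValue()
    {
        return $this->value;
    }
}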

Would we benefit from only strict mode?

Supporters of strict mode only: Make sure to understand why this will never happen!

Say the current RFC gets rejected, would we benefit from a strict-only type hinting RFC? No, and the current RFC details the exact reasons why. Most notably, for BC reasons the built-in PHP APIs will not use the new strict type hinting.

This current RFC is the only chance to get any kind of strict hinting into PHP. Yet given the limited usefulness described before, we can agree that just having weak mode would be more consistent and therefore better for everyone.


As a PHP developer using frameworks and libraries that help me write type-safe code today, strict typing appeals to me. But put to the test in real code it proves to be impractical in many cases, and not actually much more useful than weak type hinting in many others.

Weak types provide me with much of the type safety I need: in any given method, using only type-hinted parameters and return values, I am safe from type juggling. As a library developer I have to assume the caller uses weak mode all the time.

Having strict type hints suggests that we can somehow get rid of type juggling altogether. But that is not true, because we still have to work with user input and databases.

The current RFC only introduced the extra strict mode because developers had a very negative reaction towards weak type hints. Strike me from this list, weak type hints are everything that PHP should have. I will go as far as saying that other strict-typers would probably agree when actually working with the patch.

I would rather prefer just having weak types for now; this is already a big change for the language and would prove to be valuable for everyone.

I fear strict mode will have no greater benefit than gamification of the language, where the winner is the one with the highest percentage of strict mode code.

Running HHVM with a Webserver

I haven’t used HHVM yet because a use-case for the alternative PHP runtime didn’t come up. Today I was wondering if our Qafoo Profiler would run out of the box with HHVM using the builtin XHProf extension (answer: it does).

For this experiment I wanted to run my wife’s Wordpress blog on HHVM locally. It turns out this is not very easy with an existing LAMP stack, because mod-php5 and mod-fastcgi obviously compete for the execution of .php files.

Quick googling didn’t turn up a solution (there probably is one, hints in the comments are appreciated) and I didn’t want to install a Vagrant box just for this. So I decided to turn it into a Sunday side project. The requirements: a simple webserver that acts as a proxy in front of HHVM’s FastCGI. Think of it as the “builtin webserver” that HHVM is missing.

This turns out to be really simple with Go, a language I have used a lot for small projects in the last months.

The code is very simple plumbing: an HTTP server that accepts client requests, translates them to FastCGI requests, sends them to HHVM and then parses the FastCGI response to turn it into an HTTP response.

As a PHP developer I am amazed how Go makes it easy to write this kind of infrastructure tooling. I prefer PHP for everything web related, but as I tried to explain in my talk at PHPBenelux last week, Go is a fantastic language to write small, self-contained infrastructure components (or Microservices if you want a buzzword).

Back to playing with HHVM, if you want to give your application a try with HHVM instead of ZendEngine PHP it boils down to installing a prebuilt HHVM package and then using my hhvm-serve command:

$ go get github.com/beberlei/hhvm-serve
$ hhvm-serve --document-root /var/www
Listening on http://localhost:8080
Document root is /var/www
Press Ctrl-C to quit.

The server passes all the necessary environment variables to HHVM so that catch-all front-controller scripts such as Wordpress index.php or Symfony’s app.php should just work.

If you don’t have a running Go compiler setup, these few lines should help you out on Ubuntu:

$ sudo apt-get install golang
$ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
$ mkdir -p ~/go/{src,bin,pkg}

You should put the $GOPATH and $PATH changes into your bashrc to make this a permanent solution.

To start playing with HHVM, a Wordpress installation is a good first candidate to check on, as I knew from HHVM team blog posts that Wordpress works. Using a simple siege based benchmark I was able to trigger the JIT compiler, and the Profiler charts showed a nice performance boost minute after minute as HHVM replaced dynamic PHP with optimized (assembler?) code.

Symfony All The Things (Web)

My Symfony Hello World post introduced the smallest possible example of a Symfony application. Using this in trainings helps the participants understand just how few parts a Symfony application contains. Sure, there are lots of classes participating under the hood, but I don’t care about the internals, only about the public API.

We use microservice architectures for the bepado and PHP Profiler projects that Qafoo is working on at the moment. For the different components a mix of Symfony Framework, Silex, Symfony Components and our own Rest-Microframework (RMF) is used. This zoo of different solutions sparked a recent discussion with my colleague Manuel about when we would want to use Symfony for a web application.

We quickly agreed on: Always. I can’t speak for Manuel, but these are my reasons for this decision:

  • I always want to use a technology that is based on Symfony HttpKernel, because of the built-in caching, ESI and the Stack-PHP project. I usually don’t need this at the beginning of a project, but at some point the simplicity of extending the Kernel through aggregation is incredible.

    This leaves three solutions: Silex, Symfony Framework and Plain Components.

  • I want a well documented and standardized solution. We are working with a big team on bepado, often rotating team members for just some weeks or months.

    We can count the hours lost for developers when they have to start learning a new stack again. Knowing where to put routes, controllers, templates, configuration et al is important to make time for the real tasks.

    This leaves Symfony Framework and Silex. Everything built with the components is always custom and therefore not documented well enough.

  • I want a stable and extendable solution. Even when you just use Symfony for a very small component you typically need to interface with the outside world: OAuth, REST-API, HTTP Clients, Databases (SQL and NoSQL). There is (always) a bundle for that in Symfony.

    Yes, Silex typically has a copy-cat provider for its own DIC system, but it is usually missing some configuration option or advanced use-case. In some cases it’s just missing something as simple as a WebDebug toolbar integration that the Symfony bundle has.

    My experience with Silex has been that it’s always several steps behind Symfony in terms of reusable functionality. One other downside of Silex in my opinion is its missing support for DIC and route caching. Once your Silex application grows beyond its initial scope it starts to slow down.

  • I want just one solution if it’s flexible enough.

    It is great to have so many options, but that is also a curse. Lukas points out he is picking between Laravel, Silex or Symfony depending on the application use-case.

    But the web technology stack is already complex enough in my opinion. I would rather have my developers learn and use different storage/queue or frontend technologies than have them juggle between three frameworks. If my experience with Symfony in the last 4 years taught me anything: hands-on exposure to a single framework for that long leads to impressive productivity.

    And Symfony is flexible. The Dependency Injection based approach combined with the very balanced decoupling through bundles allows you to cherry-pick only what you need for every application: APIs, RAD, Large Teams. Everything is possible.

The analysis is obviously biased because of my previous exposure to the framework. The productivity gains are possible with any framework as long as it has a flourishing ecosystem. For anyone else this reasoning can end with choosing Laravel, Silex or Zend Framework 2.

So what is the minimal Symfony distribution that would be a starting point? Extending on the Symfony Hello World post:

  1. composer.json
  2. index.php file
  3. A minimal AppKernel
  4. A minimal config.yml file
  5. routing files
  6. A console script
  7. A minimal application bundle

You can find all the code on Github.

Start with the composer.json:

    "require": {
        "symfony/symfony": "@stable",
        "symfony/monolog-bundle": "@stable",
        "vlucas/phpdotenv": "@stable"
    "autoload": {
        "psr-0": { "Acme": "src/" }

The index.php:

// web/index.php

require_once __DIR__ . "/../vendor/autoload.php";
require_once __DIR__ . "/../app/AppKernel.php";

use Symfony\Component\HttpFoundation\Request;

Dotenv::load(__DIR__ . '/../');

$request = Request::createFromGlobals();
$kernel = new AppKernel($_SERVER['SYMFONY_ENV'], (bool)$_SERVER['SYMFONY_DEBUG']);
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);

We are using the package vlucas/phpdotenv to add Twelve-Factor app compatibility, simplifying configuration. This allows us to get rid of the different front controller files based on environment. We need a file called .env in our application root containing key-value pairs of environment variables:

# .env
SYMFONY_ENV=dev
SYMFONY_DEBUG=1

Add this file to .gitignore. Your deployment to production needs a mechanism to generate this file with production configuration.

Our minimal AppKernel looks like this:

// app/AppKernel.php

use Symfony\Component\HttpKernel\Kernel;
use Symfony\Component\Config\Loader\LoaderInterface;

class AppKernel extends Kernel
{
    public function registerBundles()
    {
        $bundles = array(
            new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
            new Symfony\Bundle\TwigBundle\TwigBundle(),
            new Symfony\Bundle\MonologBundle\MonologBundle(),
            new Acme\HelloBundle\AcmeHelloBundle(),
        );

        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            $bundles[] = new Symfony\Bundle\WebProfilerBundle\WebProfilerBundle();
        }

        return $bundles;
    }

    public function registerContainerConfiguration(LoaderInterface $loader)
    {
        $loader->load(__DIR__ . '/config/config.yml');

        if (in_array($this->getEnvironment(), array('dev', 'test'))) {
            $loader->load(function ($container) {
                $container->loadFromExtension('web_profiler', array(
                    'toolbar' => true,
                ));
            });
        }
    }
}

It points to a configuration file config.yml. We don’t use different configuration files per environment here because we don’t need them. Instead we use the closure loader to enable the web debug toolbar when we are in the development environment.

Symfony configuration becomes much simpler if we don’t use the inheritance and load everything from just a single file:

# app/config/config.yml (top-level keys restored)
framework:
    secret: %secret%
    router:
        resource: "%kernel.root_dir%/config/routing_%kernel.environment%.yml"
        strict_requirements: %kernel.debug%
    templating:
        engines: ['twig']
    profiler:
        enabled: %kernel.debug%

monolog:
    handlers:
        main:
            type:         fingers_crossed
            action_level: %monolog_action_level%
            handler:      nested
        nested:
            type:  stream
            path:  "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug

We can set the parameter values for %secret% and %monolog_action_level% by adding new lines to the .env file, making use of the excellent external configuration parameter support in Symfony.

# .env (example values; SYMFONY__ prefixed variables become container parameters)
SYMFONY__SECRET=ThisTokenIsNotSoSecretChangeIt
SYMFONY__MONOLOG_ACTION_LEVEL=error

Now add a routing_prod.yml file with a hello world route:

# app/config/routing_prod.yml
hello:
    pattern: /hello/{name}
    defaults:
        _controller: "AcmeHelloBundle:Default:hello"

And, because our routing resource depends on the environment in config.yml, also a routing_dev.yml containing the WebDebug toolbar and profiler routes:

# app/config/routing_dev.yml
_wdt:
    resource: "@WebProfilerBundle/Resources/config/routing/wdt.xml"
    prefix:   /_wdt

_profiler:
    resource: "@WebProfilerBundle/Resources/config/routing/profiler.xml"
    prefix:   /_profiler

_main:
    resource: routing_prod.yml

We now need the bundle AcmeHelloBundle that is referenced in the routing and in the AppKernel. When we follow Fabien’s best practice about adding services, routes and templates to the app/config and app/Resources/views folders, adding a bundle just requires the bundle class:

// src/Acme/HelloBundle/AcmeHelloBundle.php

namespace Acme\HelloBundle;

use Symfony\Component\HttpKernel\Bundle\Bundle;

class AcmeHelloBundle extends Bundle
{
}

And the controller that renders our Hello World:

// src/Acme/HelloBundle/Controller/DefaultController.php

namespace Acme\HelloBundle\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\Controller;

class DefaultController extends Controller
{
    public function helloAction($name)
    {
        return $this->render(
            'AcmeHelloBundle:Default:hello.html.twig',
            array('name' => $name)
        );
    }
}

Now we only need to put a template into app/Resources:

{# app/Resources/AcmeHelloBundle/views/Default/hello.html.twig #}
Hello {{ name }}!

As a last requirement we need a console script to manage our Symfony application. We reuse the vlucas/phpdotenv integration here to load all the required environment variables:

#!/usr/bin/env php
// app/console


require_once __DIR__.'/../vendor/autoload.php';
require_once __DIR__.'/AppKernel.php';

use Symfony\Bundle\FrameworkBundle\Console\Application;
use Symfony\Component\Console\Input\ArgvInput;

Dotenv::load(__DIR__ . '/../');

$input = new ArgvInput();
$kernel = new AppKernel($_SERVER['SYMFONY_ENV'], (bool)$_SERVER['SYMFONY_DEBUG']);
$application = new Application($kernel);
$application->run($input);

Voila. The minimal Symfony distribution is done.

Start the PHP built-in webserver to take a look:

$ php -S localhost:8080 web/index.php

I personally like the simplicity of this; the only things that annoy me are the two routing files that I need to conditionally load the web profiler routes, and the closure loader for the web_profiler extension. I suppose the nicer approach would be a compiler pass that does all the magic behind the scenes.
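As a rough sketch of that idea (the parameter name is taken from WebProfilerBundle, everything else is assumed and untested):

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;

class EnableWebProfilerPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        // only act when WebProfilerBundle is registered (dev/test)
        if (!$container->hasParameter('web_profiler.debug_toolbar.mode')) {
            return;
        }

        // 2 equals WebDebugToolbarListener::ENABLED
        $container->setParameter('web_profiler.debug_toolbar.mode', 2);
    }
}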

From this minimal distribution you can:

  1. Add new services to app/config/config.yml.
  2. Add new routes to app/config/routing_prod.yml.
  3. Add controllers into new bundles and templates into app/Resources.
  4. Add third party bundles or Stack-PHP implementations when you need existing, reusable functionality such as OAuth, Databases etc.
  5. Add configuration variables to the .env file instead of using the app/config/parameters.yml approach.

This scales well, because at every point you can move towards abstracting bundles and configuration more using Symfony’s built-in functionality. No matter what type of application you build, it is always based on Symfony and the building blocks are always the same.

I suggest combining this minimal Symfony with the QafooLabsFrameworkExtraBundle that I blogged about two weeks ago. Not only will Symfony be lightweight, but so will your controllers. You can build anything on top of this foundation, from simple CRUD to APIs, hexagonal or CQRS architectures.

Lightweight Symfony2 Controllers

For quite some time I have been experimenting with how best to implement Symfony 2 controllers to avoid depending on the framework. I have discussed many of these insights here in my blog.

There are three reasons for my quest:

Simplicity: Solutions that avoid the dependencies between framework and your model typically introduce layers of abstraction that produce complexity. Service layers, CQRS and various design patterns are useful tools, but developing every application with this kind of abstraction screams over-engineering.

While the Domain-Driven Design slogan is “Tackling complexity in software”, there are many abstractions out there that can better be described as “Causing complexity in software”. I have written some of them myself.

Testability: There is a mantra “Don’t unit-test your controllers” that arose because controllers in most frameworks are just not testable. They have many dependencies on other framework classes and cannot be created in a test environment. This has led many teams to use slow and brittle integration tests instead.

But what if controllers were testable because they no longer depend on the framework? We could avoid testing the many layers that we have removed for the sake of simplicity and also reduce the number of slow integration tests.

Refactorability: I found that when using a service layer or CQRS, there is a tendency to use them for every use-case, because the abstraction is in place. Any use-case that is not written with those patterns is coupled against the framework again. Both development approaches are very different, and refactoring from one to the other typically requires a rewrite.

A good solution should allow refactoring from a lightweight controller to a service layer with a small number of extract-method and extract-class refactorings.

While working on the Qafoo PHP Profiler product I went to work on a solution that allows for Simplicity, Testability and Refactorability and came up with the NoFrameworkBundle.

The design of the bundle is careful to extend Symfony in a way that is easy for Symfony developers to understand. To achieve this it heavily builds upon the FrameworkExtraBundle that is bundled with Symfony.

The design goals are:

  • Favour Controller as Services to decouple them from the Framework.
  • Replace every functionality of the Symfony Base controller in a way that does not require injecting a service into your controller.
  • Never fetch state from services, instead inject the state into the controller.
  • Avoid annotations.

The concepts are best explained by showing an example:


use QafooLabs\MVC\TokenContext;

class TaskController
{
    private $taskRepository;

    public function __construct(TaskRepository $taskRepository)
    {
        $this->taskRepository = $taskRepository;
    }

    public function showAction(TokenContext $context, Task $task)
    {
        if (!$context->isGranted('ROLE_TASK', $task)) {
            throw new AccessDeniedHttpException();
        }

        return array('task' => $task);
    }
}

This example demos the following features:

  • The TokenContext wraps access to the security.context service and is used for checking access permissions and retrieving the current User object. It is passed to the controller with the help of ParamConverter feature.

    TokenContext here is just an interface, and for testing you can use a very simple mock implementation to pass an authenticated user to your controller (see the test sketch after this list).

  • View parameters are returned from the controller as an array, however without requiring the @Template annotation of the SensioFrameworkExtraBundle.
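To illustrate the testability claim, here is a sketch of a plain unit test for this controller (the mocking details are my assumptions, not taken from the bundle):

class TaskControllerTest extends \PHPUnit_Framework_TestCase
{
    public function testShowActionDeniesAccessWithoutRole()
    {
        // mock the TokenContext interface, no Symfony services needed
        $context = $this->getMock('QafooLabs\MVC\TokenContext');
        $context->expects($this->any())
            ->method('isGranted')
            ->will($this->returnValue(false));

        $task = $this->getMockBuilder('Task')
            ->disableOriginalConstructor()
            ->getMock();
        $repository = $this->getMockBuilder('TaskRepository')
            ->disableOriginalConstructor()
            ->getMock();

        $this->setExpectedException('AccessDeniedHttpException');

        $controller = new TaskController($repository);
        $controller->showAction($context, $task);
    }
}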

The next example demonstrates the abstraction for form requests that helps writing very concise form code:


use QafooLabs\MVC\TokenContext;
use QafooLabs\MVC\RedirectRouteResponse;
use QafooLabs\MVC\FormRequest;

class TaskController
{
    private $taskRepository;

    public function newAction(FormRequest $formRequest, TokenContext $context)
    {
        $task = new Task($context->getUser());

        if ($formRequest->handle(new TaskType(), $task)) {
            $this->taskRepository->save($task);

            return new RedirectRouteResponse('Task.show', array('id' => $task->getId()));
        }

        return array('form' => $formRequest->createFormView());
    }
}

  • The RedirectRouteResponse is used to redirect to a route without a need for the router service.

  • Usage of the FormRequest object that is a wrapper around the FormFactory and Request objects. It is passed in by using a ParamConverter. The method $formRequest->handle() combines binding the request and checking for valid data.

    Again there is a set of mock form requests that allow you to simulate valid or invalid form requests for testing, as sketched below.
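A sketch of what such a test could look like (the stub class names are placeholders, see the bundle for the real test doubles):

class TaskControllerFormTest extends \PHPUnit_Framework_TestCase
{
    public function testNewActionRedirectsOnValidForm()
    {
        // placeholder stubs standing in for the bundle's mock classes
        $context = new FakeTokenContext(new User());
        $formRequest = new ValidFormRequestStub();

        $controller = new TaskController(new InMemoryTaskRepository());
        $response = $controller->newAction($formRequest, $context);

        $this->assertInstanceOf('QafooLabs\MVC\RedirectRouteResponse', $response);
    }
}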

Writing controllers in this way addresses my requirements Simplicity, Testability and Refactorability. For simple CRUD controllers they only ever need access to a repository service. If one of your controllers grows too big, just refactor out its business logic into services and inject them.

Check out the repository on Github for some more features that we are using in the Profiler.

Update 1: Renamed FrameworkContext to TokenContext as done in the new 2.0 version of the bundle.
