Using xdebug with wp-cli

Sometimes when you want to profile something in WordPress using xdebug, it’s not quite as straightforward as enabling the tool in php.ini and passing a query string parameter. Say, for example, that you want to test how expensive it is to delete a term in WordPress. You can load the term page with the query string parameter in place, but the actual deletion happens via a POST request that doesn’t carry the parameter. You have a few options that I know of:

  1. Turn on xdebug for every request and figure out which cachegrind file contains the bits you actually want to profile.
  2. Hack the code to add the query string parameter to your POST request.
  3. Write a cookie that xdebug can use in place of the query string variable.

None of these options especially appeal to me. They just seem clumsy.

I’ve used wp-cli more and more lately to interact with WordPress installs. And of course wp-cli has commands for interacting with terms. So it’d be neat to be able to profile wp-cli commands using xdebug. Luckily, it turns out to be pretty easy to do.

First, get xdebug and wp-cli set up. The links above will point you to documentation for doing so. When I set up xdebug, the extension was added to Apache’s php.ini but not to the CLI version of php.ini, so I had to copy the relevant line (extension=/path/to/xdebug.so) from one file to the other.
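To confirm that the CLI binary actually picked up the extension, a quick sanity check (nothing wp-cli-specific here) is to list the loaded modules:

php -m | grep -i xdebug

If xdebug shows up in the output, you’re set.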

Now you just have to add the -dxdebug.profiler_enable=1 option to your call to php at the command line. With wp-cli, though, you’re not calling php directly, so you have to figure out how to get the option passed through to the actual call. It turns out that the bash script that wraps wp-cli reads a handy environment variable named $WP_CLI_PHP_ARGS that you can use to pass arguments along to php.

So if you always want to use xdebug with wp-cli, you can just do something like this in your .bash_profile:

export WP_CLI_PHP_ARGS=-dxdebug.profiler_enable=1

But maybe you don’t always want to profile wp-cli. Doing so slows operations down a little, and it generates pretty big cachegrind files, especially for complex or long-running operations (for example, a simple wp term delete command that takes around 2 seconds was generating 5MB cachegrind files for me). To profile wp-cli only when I really want to, without having to remember to set or unset $WP_CLI_PHP_ARGS every time I toggle, I just added these lines to my .bash_profile:


alias wp="export WP_CLI_PHP_ARGS=; $HOME/.wp-cli/bin/wp"
alias wpd="export WP_CLI_PHP_ARGS=-dxdebug.profiler_enable=1; $HOME/.wp-cli/bin/wp"

Now, if I want to generate cachegrind files to profile, I execute wpd instead of wp, and life is good.
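For example (the term ID here is made up, and this assumes xdebug’s default profiler_output_dir of /tmp):

wpd term delete category 123

ls -lh /tmp/cachegrind.out.*

Each profiled run drops a cachegrind.out.<pid> file that you can open in your cachegrind viewer of choice.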

Custom Feed Links in WordPress

At a WordPress meetup tonight, the question arose of how to override the default feed links for a WordPress site. For example, what if you’re using FeedBurner and want to change the links in your source to the relevant FeedBurner links without hacking your theme? I don’t know if it’s the best way, but this can be done pretty easily with a plugin that, in its simplest form, looks like this:

[sourcecode lang="php"]
<?php
function feedme_remove_feed_links() {
	remove_theme_support( 'automatic-feed-links' );
}
add_action( 'after_setup_theme', 'feedme_remove_feed_links', 11 );

function feedme_add_feed_links() {
?>
<!-- CUSTOM FEEDS HERE -->
<?php
}
add_action( 'wp_head', 'feedme_add_feed_links' );
[/sourcecode]

Of course, you would need to make the feedme_add_feed_links() function do something a bit more useful, and in an ideal world, you’d provide an admin screen that allows people to specify their links.
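For instance, a bare-bones version might just hard-code the links (the FeedBurner URL below is only a placeholder, and this function would replace the empty stub above):

[sourcecode lang="php"]
<?php
function feedme_add_feed_links() {
	// Placeholder FeedBurner address -- swap in your own feed URL.
	$feed_url = 'http://feeds.feedburner.com/yourfeedname';

	echo '<link rel="alternate" type="application/rss+xml" title="' .
		esc_attr( get_bloginfo( 'name' ) ) . ' Feed" href="' .
		esc_url( $feed_url ) . '" />' . "\n";
}
[/sourcecode]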

One important detail that may not jump out at you is that when adding the “after_setup_theme” action, you need to give it a priority greater than the default of 10. Otherwise your callback just goes into the stack alongside the theme’s own setup callback at the default priority, and it can run before the theme adds its automatic feed links, in which case the removal has no effect.

Stage

For a long time at my day job, one of our big web site issues has been the staging of database-driven content. Particularly if you’re editing Drupal pages that have a lot of markup in them, publishing a node can be sort of scary, as it goes live instantly with any bugs you’ve introduced. In theory, Drupal’s preview feature can be used to view your changes before you commit to them, but this too is scary, as the content isn’t rendered exactly as it will be once published. Further, using vanilla Drupal with its preview function to stage content requires that you roll out changes one by one. If you want to group changes for a mass rollout, the best you can do is wrap your changes in html comments and uncomment them one by one during deployment, hoping you don’t fat-finger anything in the process. I’ve always thought this would be a pretty difficult problem to solve, but yesterday, I came up with what feels like a satisfactory method for staging content.

The new stage module addresses both safety-netted staging of individual content and management of change sets.

It works by tapping into Drupal’s revision system, which already allows you to track changes to content over time and to revert to older content. For specified types of content, any additions or edits are published using the normal Drupal workflow, but on publish, the revision number is pinned at its last blessed point. You can edit or add any number of documents, and they all remain pinned at their pre-edit revision until you roll the whole batch of changes forward. When you roll a batch forward, all the revision numbers are brought to their most recent and pinned there until the next deployment. In the administration section, you identify staging and production servers. If you view an affected node from one of the specified staging hosts, you see the latest copy; if you view it from a production host, you see the pinned version.
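Just to illustrate the idea (this is not the module’s actual code; the function, the $pinned_vid bookkeeping, and the host name are all made up for the example), the host-based switch amounts to something like this:

[sourcecode lang="php"]
<?php
// Conceptual sketch only: staging hosts see the newest revision,
// production hosts see the last blessed ("pinned") revision.
function stage_sketch_pick_revision($current_vid, $pinned_vid, $staging_hosts) {
  if (in_array($_SERVER['HTTP_HOST'], $staging_hosts)) {
    return $current_vid;
  }
  return $pinned_vid;
}
[/sourcecode]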

This workflow is ideal for environments in which fairly frequent milestones are deployed. Because of Drupal’s handy dandy revision system, you can compare versions of the content across pushes to see what’s changed.

The module is hot off the presses this morning and so is probably still buggy and feature-poor, but it’s a start.