Simplify WordPress Development with WP-CLI

https://phpconference.com/blog/simplify-wordpress-development-wp-cli-guide/ – Mon, 12 Aug 2024

What if I told you that you can generate proper PHP code on command? Remember, all I'm offering is the truth – nothing more.

Setting up a project today is a project in itself. git clone, nvm install, nvm use, npm install, npm run start, fix the Docker port issue, nvm install && nvm use, npm install, npm run start… Ahh, OK: rm -rf *, git clone…

When you finally make all the different tools work on your machine, it's time to write the code – but not just yet. You still have to do some copy/pasting from the docs. Where was that page with a code example again?


Two hours later, after you've successfully reorganised your "code snippets" bookmark folder, you have the starter code pasted and can start coding. Now, just to quickly clean out the things you know you won't need – five minutes, tops…

Whew, it's a good thing you googled that weird part of the snippet, because there were at least three Stack Overflow answers explaining how that legacy code loses you 3–5 milliseconds of performance. This is great: you've set up a new project with the latest code AND you've learned something new while doing so. All good to go to start coding first thing tomorrow.

You feel like you really accomplished something today, didn't you? You've set up a project!

I hate that. I hate that something equivalent to opening a code editor and getting into gear takes so much time and effort that you feel accomplished when, in fact, you haven't even started.

But it doesn’t have to be that way. There is, in fact, a tool for that.

The Tool

It's a wonderful, fast, powerful, open-source CLI tool: WP-CLI, the command line interface for WordPress.

I'm a huge fan of CLI tools. They do exactly what you tell them, and they even tell you what to tell them. They all work perfectly together even though they aren't aware of each other. There is no "new look" or "this button has moved over here". They do exactly what they should, nothing more and nothing less.

Like any other CLI tool, WP-CLI has a global --help parameter for every command and subcommand, which provides complete documentation for said command or subcommand, together with usage examples. You'll find things you need only once in a while, but there are also commands that are useful in your everyday WordPress development work.
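For example, you can read the complete documentation for the download command, usage examples included, right in your terminal:

wp core download --help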

I'm not going to run you through the installation process. It requires 3.5 commands and is pretty straightforward.

It's worth noting that WP-CLI is not officially supported on Windows. It's possible to install and use it there, but some commands can have unexpected outcomes for various reasons, and the maintainers don't guarantee those can be fixed.

Everyday Work

There are different things you have to work on every day, and there are only so many commands and flags you can memorise. Personally, I don't like memorising anything. So, I use a lot of help and the global parameter --prompt. It will prompt you for every flag a command has, so you don't have to worry about typos or missing a required flag.

Type values unique to your current use case

All you have to do is type the values that are unique to your current use case. If you start using only this global parameter, you're already 70% more productive than someone who doesn't (I made that stat up).

There are many everyday tasks for which I use WP-CLI – too many to cover here. I'll focus on those that would otherwise require looking into the docs to perform an informed copy/paste.

Install WordPress

My personal motto is "if you're going to do it more than once, automate it", regardless of whether you do it often or not.

If often, you'll be bored of typing the same thing. If rarely, you'll always waste the same amount of time figuring out what exactly you need to do. Having a bash script only requires remembering that you have it for this specific task, and then using it.

The traditional way

Assuming you've already created a new database and have a root folder for your install, installing WordPress involves 3 major steps:

  1. Download and unzip the latest version of WordPress
  2. Create config file with all database credentials
  3. Run the install script

Doing it the traditional way requires opening at least two different URLs in your browser, a few directories in your file manager, and one file in your code editor.

The WP-CLI way

Doing it in a terminal with WP-CLI requires 3 commands:

  1. wp core download
  2. wp config create --dbuser=<DB_USER> --dbpass=<DB_PASSWORD> --dbname=<DB_NAME>
  3. wp core install --url=<URL> --title=<TITLE>

The lazy way

Automating it requires one command to run one bash script. I'll create a bash script for use in your local environment, just to illustrate how far we can go with automation.

We'll use those 3 commands from above. The first one doesn't need any changes unless, for some reason, you want an older version, in which case you can use the --version flag.

wp core download

The second command needs some modification. First, in my local environment I'll always have the same database user. With older MySQL versions it was root for me, but with the latest update I had to change it to a different user. Regardless, it's always the same, so I can hardcode that value.

On the other hand, I never want to hardcode any important password, and I also don't want any of those passwords saved in bash history (which happens if you use WP-CLI --prompt). The solution is to use a silent prompt with the bash read command.

The database name will be different for every use case, so that one can also be handled by the read command.

read -p 'Database name: ' dbname
read -sp 'Database password: ' dbpass
wp config create --dbuser=milana --dbpass=$dbpass --dbname=$dbname --prompt

 

These three are mandatory parameters and, most of the time, that's enough for your install. However, if you need more custom settings, you can add --prompt to be prompted for all the other parameters.

The third command can be generated entirely out of the two values we already have. In my local environment, the URL is usually the same as the database name, with the addition of .loc: --url="${dbname}.loc". The same URL can be used for the admin user's email address: --admin_email="admin@${dbname}.loc".

The website title is not important here, especially if you're going to import a database from a remote website, but the title is a mandatory flag for this command, so we can't skip it. Using the database name will be sufficient: --title=$dbname.

For the admin user I usually use admin for both username and password, because in the local environment I don't really care about secure passwords. If you want it more secure for the same amount of work, you can reuse the database password, or you can put in a bit more effort and add another silent prompt for it.

Now we have something like this:

wp core install --url="${dbname}.loc" --title=$dbname --admin_user=admin --admin_password=admin --admin_email="admin@${dbname}.loc"

 

The script

Putting it all together in a script, we get just a handful of lines that you’ll never have to type again.

Create a new file and call it whatever you want. Remember, the file name will be the command you run in the terminal. Mine is wp-install. You can add the .sh extension, but it's not needed. Copy our 3 commands into the file, preceded by a shebang:

#!/bin/bash
wp core download
read -p 'Database name: ' dbname
read -sp 'Database password: ' dbpass
wp config create --dbuser=milana --dbpass=$dbpass --dbname=$dbname --prompt
wp core install --url="http://${dbname}.loc" --title=$dbname --admin_user=admin --admin_password=admin --admin_email="admin@${dbname}.loc"

To run it we must make it executable:

chmod +x wp-install

And move it somewhere in your $PATH so that you can execute it from any directory. If you don't know which directories are in your $PATH, run echo $PATH to see the list of directories where you can put this file. I like to keep my scripts in /home/milana/bin.

mv wp-install ~/bin/wp-install

Now you can run this script just by typing wp-install, and in a few seconds, you'll have a freshly installed WordPress in that directory.

WP Install

Now, setting up this script might have taken a bit more time than you wanted to spend on installing local WordPress, but guess what: from now on, that task will only take a few seconds.

The beauty of this is that you can extend the script exactly the way you want. You can add plugins or themes you frequently use, set a specific language, generate dummy content, or make it part of other scripts. The possibilities are endless.
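As a sketch of such an extension – the plugin slug, locale, and post count below are just examples – a few extra lines at the end of the script could look like this:

wp plugin install query-monitor --activate
wp language core install de_DE --activate
wp post generate --count=20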


Scaffold plugin

Now that we have WordPress installed, it's time to write some code. Before you head off to the docs to find that copy/paste code for your plugin or theme, look at the help for the scaffold command. Chances are you'll find a command for what you need.

A completely new plugin is just one command away:

wp scaffold plugin --prompt

Scaffold plugin

Parameters for scaffolding the plugin:

  • Slug (mandatory): The internal name of the plugin. Also, the directory name if not overridden by the next argument.
  • Directory name: Put the new plugin in some arbitrary directory path. Plugin directory will be path plus supplied slug.
  • Plugin name: What to put in the ‘Plugin Name:’ header.
  • Plugin description: What to put in the ‘Description:’ header.
  • Plugin author: What to put in the ‘Author:’ header.
  • Plugin author URI: What to put in the ‘Author URI:’ header.
  • Plugin URI: What to put in the ‘Plugin URI:’ header.
  • Skip tests: Don’t generate files for unit testing.
  • CI: Choose a configuration file for a continuous integration provider. Default is Circle.
  • Activate: Activate the newly generated plugin.
  • Activate network: Activate the newly generated plugin for the whole network (if it is multisite).
  • Force: Overwrite files that already exist.

This will not only create a new plugin in your plugins directory, it will also generate everything you need to start writing your custom code: tests, an npm setup, and translation files.

Everything to start writing your custom code

On top of that, the readme.txt file follows the readme template used by WordPress.org, so if you want to host your plugin in the WordPress.org plugin repository, you're all set. All you have to add now is what's unique to your plugin.

Scaffold CPT

OK, so now can we start writing code? Well, not quite. I want a custom post type (CPT) in this plugin. This is yet another code snippet you would copy/paste from the official docs. And yet, it's right there, available in your terminal with, you guessed it, only one command.

wp scaffold post-type --prompt

Scaffold CPT

 

Parameters for scaffolding the post type:

  • Slug (mandatory): The internal name of the post type.
  • Label: Name of the post type shown in the menu. Usually plural.
  • Textdomain: The textdomain to use for the labels. If not specified, it uses the textdomain of the plugin or theme it's part of.
  • Dashicon: The dashicon to use in the menu. Default is admin-post.
  • Theme: Using --theme will add the CPT to the active theme; providing a theme slug will place the CPT in that theme.
  • Plugin: Create a file in the given plugin’s directory. If no theme or plugin is defined, the code will be sent to STDOUT.
  • Raw: Just generate the register_post_type() call and nothing else.
  • Force: Overwrite files that already exist.
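Put together, a non-interactive call using these flags might look like this (the developers slug matches the include we'll use below; the plugin slug is just an example):

wp scaffold post-type developers --label=Developers --plugin=my-plugin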

Now we have a new directory inside our plugin.

New directory inside our plugin

And to make it work, we need to write some code now. First, we want to prevent direct access to the plugin file and, after that, include the file with our CPT code.

// Your code starts here.
if ( ! defined( 'ABSPATH' ) ) {
  die();
}

include_once __DIR__ . '/post-types/developers.php';

In only 4 lines of code that we actually wrote, we have a ready-to-use, fully functional plugin with a custom post type.

Rare occasions

Coding in production, anyone? Nobody does it anymore, and for good reason. In fact, today's tools and accepted workflows are designed to keep you as far away from production as possible, and I support that. Really, it's the smartest thing to do.

However… there are some rare occasions when something has to be done only once, and only in production.

Let's say you need data from an external API imported into production. You'll do this only once and, after that, the system will just use that data.

Or let's go further: consider the case where you must do something that has nothing to do with the project itself. For example, a client asks you for a list of all users on the website with the company's email address (because they want a list of staff who have access to the website). You don't want to do this manually – why would you? Nor do you want to spend hours coding this functionality and going through the whole workflow of Git pull request, code review, and deployment build. You don't want this code inside your project's code base, because it has nothing to do with it.

These, along with many other reasons, are all valid grounds to do something directly in production, and only once.

The query

The thing I love about WP-CLI is that it can express a very specific query in one command. Doing it with PHP can take 10+ lines of code.

Looping over the results of this query with PHP can take another 5–10 lines of code, while with WP-CLI we're still at a one-liner.

Exporting these results into a file takes, you guessed it, another 5–10 lines of PHP code, while with WP-CLI, you guessed it again, it's still one line. In fact, with WP-CLI, changing the output format is just one parameter. With PHP, it's a bit more complicated.
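For instance, the same user list can be switched to JSON or reduced to a bare count with nothing but the --format parameter:

wp user list --format=json
wp user list --format=count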

If I want to list all users and return only their username, display name and email address with WP-CLI, I’d use this command:

wp user list --fields=user_login,user_email,display_name

List all users and return only username, display name, and email address

The PHP code for the same result is this:

$users = get_users(
  [
    'fields' => [
      'user_login',
      'user_email',
      'display_name'
    ],
  ]
);

PHP Code

If I want to export this list into a CSV file with WP-CLI, all I have to do is set the format and point at a file:

wp user list --fields=user_login,user_email,display_name --format=csv > users.csv

If the file doesn’t exist, one will be created.

This is what it takes with PHP:

$users = get_users(
  [
    'fields' => [
      'user_login',
      'user_email',
      'display_name'
    ],
  ]
);
$file = fopen( 'users.csv', 'w' );
foreach ( $users as $user ) {
  fputcsv( $file, (array) $user );
}
fclose( $file );

The rare query

Now that we've established how much faster it is to use WP-CLI for our rare occasion, let's go back to that unrelated client request: export the list of all staff users – users with the website's domain name in their email address.

For this, we’re going to include the search parameters in our query. We want to search for wpcli.loc inside the user_email column.

In PHP that would look like this:

$users = get_users(
  [
    'fields' => [
      'user_login',
      'user_email',
      'display_name'
    ],
    'search' => '*wpcli.loc',
    'search_columns' => [
      'user_email'
    ],
  ]
);

NOTE: You might have noticed by now, but in case you didn't: WP-CLI uses the exact same parameters as the PHP side of WordPress, so you can easily convert one into the other.

With WP-CLI it's still a one-liner:

wp user list --fields=user_login,user_email,display_name --search=*wpcli.loc --search-columns=user_email

But there's a problem: it doesn't return the desired result. It completely ignores the --search-columns=user_email parameter.

Search results

As you can see in the screenshot above, it returns a user that has no wpcli.loc email but has it in the user_url column. Obviously, in this case you could change the search query to *@wpcli.loc and get the desired results, but you'd never be sure the results are accurate, because there are other fields in which this search keyword can appear – one of them being the user description.

Combine what’s working

One way to overcome this and still complete the task the fast way is to combine WP-CLI and PHP, using the parts that work in each. We'll use the query from PHP and the execution from WP-CLI.

Create a new PHP file in WordPress root.

touch get-users.php

And place our PHP query inside it.

<?php
$users = get_users(
  [
    'fields' => [
      'user_login',
      'user_email',
      'display_name'
    ],
    'search' => '*wpcli.loc',
    'search_columns' => [
      'user_email'
    ],
  ]
);
$file = fopen( 'users.csv', 'w' );
foreach ( $users as $user ) {
  fputcsv( $file, (array) $user );
}
fclose( $file );

To execute it, we’ll use WP-CLI:

wp eval-file get-users.php

Create a new PHP file in WordPress root

And it's done! That's all it takes. The last thing I don't like here is the lack of feedback: I can't see whether it completed or what's happening. Luckily, WP-CLI has an internal API that can help with that.

CLI feedback

First, I want to know that users are being processed. This can be done with WP_CLI::log().

foreach ( $users as $user ) {
  WP_CLI::log( sprintf( 'Reading user: %s.', $user->display_name ) );
  fputcsv( $file, (array) $user );
}

Then I want to know if the file has been created.

$filename = 'users.csv';
$file = fopen( $filename, 'w' );
foreach ( $users as $user ) {
  WP_CLI::log( sprintf( 'Reading user: %s.', $user->display_name ) );
  fputcsv( $file, (array) $user );
}
fclose( $file );
if ( ! file_exists( $filename ) ) {
  WP_CLI::error( sprintf( 'Failed to create %s file.', $filename ) );
} else {
  WP_CLI::success( sprintf( 'Created %s file.', $filename ) );
}

And finally, I want to see a nice table of all the exported users. For that I'm going to use WP_CLI\Utils\format_items(), and to make it work I'll change a few things in the existing code.

The first parameter is $format, and for that I want to use a table.

WP_CLI\Utils\format_items( 'table', $items, $fields );

The second parameter is $items, an array of arrays, where each inner array is one user.

$items = [];
foreach ( $users as $user ) {
  WP_CLI::log( sprintf( 'Reading user: %s.', $user->display_name ) );
  fputcsv( $file, (array) $user );
  $items[] = (array) $user;
}

And the last parameter, $fields, is an array of table headings, for which we can use the same things WP-CLI does – field names (or database column names).

$fields = [
  'user_login',
  'user_email',
  'display_name'
];
$users = get_users(
  [
    'fields' => $fields,
    'search' => '*wpcli.loc',
    'search_columns' => [
      'user_email'
    ],
  ]
);

The whole file now looks like this:

<?php
/**
 * Get all users with `wpcli.loc` in their email address
 * and export them to `users.csv` file.
 *
 * Execute this script with:
 * wp eval-file get-users.php
 */
$fields = [
  'user_login',
  'user_email',
  'display_name'
];
$users = get_users(
  [
    'fields' => $fields,
    'search' => '*wpcli.loc',
    'search_columns' => [
      'user_email'
    ],
  ]
);
$filename = 'users.csv';
$file = fopen( $filename, 'w' );
$items = [];
foreach ( $users as $user ) {
  WP_CLI::log( sprintf( 'Reading user: %s.', $user->display_name ) );
  fputcsv( $file, (array) $user );
  $items[] = (array) $user;
}
fclose( $file );
if ( ! file_exists( $filename ) ) {
  WP_CLI::error( sprintf( 'Failed to create %s file.', $filename ) );
} else {
  WP_CLI::success( sprintf( 'Created %s file. Following users have been exported:', $filename ) );
}
WP_CLI\Utils\format_items( 'table', $items, $fields );

Now we have a useful script with feedback about progress and status.

A useful script with feedback about progress and status.

You can go further and make it more refined: cover more error cases, or even run WP-CLI commands from this script.
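As an illustration of how far the internal API goes – a sketch, not part of the script above – WP-CLI ships a progress bar utility, and WP_CLI::runcommand() lets you call other WP-CLI commands from PHP:

$progress = \WP_CLI\Utils\make_progress_bar( 'Exporting users', count( $users ) );
foreach ( $users as $user ) {
  fputcsv( $file, (array) $user );
  $progress->tick(); // Advance the bar by one user.
}
$progress->finish();

// Run another WP-CLI command from within this script.
WP_CLI::runcommand( 'user list --format=count' );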

You can also easily import the users from this file into another WordPress website (or the same one, if needed) with the wp user import-csv command.
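The import itself is another one-liner; note that import-csv expects a header row naming the columns, so you may need to add one to the exported file first:

wp user import-csv users.csv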


Conclusion

Whether you're a seasoned WordPress developer or you only get the occasional WordPress project, WP-CLI is a tool worth looking at and trying out. It will provide you with everything you need to start coding that custom feature – including a testing suite, a task runner, and translation files – all while following recommended best practices and the WordPress coding standards.

Combining it with other CLI tools and PHP makes it even more powerful: it can perform tedious and complex tasks with such precision and speed that you'll wonder how you ever lived without it. Development before WP-CLI? I don't even remember it.

Mastering Laravel Admin Panels: The Power of Filament

https://phpconference.com/blog/filament-php-admin-panel-laravel-integration/ – Thu, 20 Jun 2024

When searching for a good PHP administration panel, it's hard to tell your options apart. They're all open source, they all look professional and modern, and they all offer a wealth of features and plug-ins. Thanks to Filament, developers can now end their search with Laravel.


The Filament framework provides a modern UI and a very extensive variety of features and components, and it enables smooth integration into any Laravel project. Developers benefit from the outstanding developer experience they've come to expect from the Laravel universe. But can the framework fulfil all the requirements of your admin panel and also guarantee a sustainable, scalable integration? We'll address these questions and more in this article.

A home match for Laravel developers

Laravel [1] has established itself as one of the most popular PHP frameworks in recent years thanks to its extensive features, good developer experience, and great community support. Filament [2] builds on Laravel's foundation and offers an extensive selection of user-friendly components for creating a powerful administration panel. The familiar convenience that developers appreciate so much about Laravel isn't neglected, because the technical basis of the framework is the TALL stack [3], which is made up of four web technologies:

  • Tailwind CSS: the modern, utility-based CSS framework [4]
  • Alpine.js: the lightweight JavaScript library [5]
  • Laravel: the popular PHP framework with a wide range of functions and tools
  • Livewire: a Laravel library that simplifies development of interactive, dynamic web applications with server-side rendering [6]

This stack is already used in many popular applications and websites in the Laravel ecosystem. It’s a very good basis and Laravel enthusiasts will feel right at home. You can get started on development without any major hurdles.


Even without any prior knowledge, Filament makes it very easy to get started. The modern, colourful website at www.filamentphp.com presents an extensive live demo of the framework's many components and possibilities (Fig. 1). A glance at the detailed documentation makes us want to get started with development ourselves.

Fig. 1: Live demo on https://demo.filamentphp.com/

A finished admin panel in under a minute?!

Once the framework is installed and the first components are created, the tagline “Accelerated Laravel Development” from the website becomes clear (Fig. 2).

Once a Laravel project is set up and started, it takes less than a minute to integrate Filament into the project and for the first user to log in.

Fig. 2: Accelerated Laravel Development with Filament

So let’s get started! As usual, we install the package with Composer:

composer require filament/filament:"^3.2" -W

So far, so good! Anyone with Laravel experience knows that the framework offers a variety of Artisan commands to automatically create components and execute other processes over the command line. After installing Filament, this command set is extended with commands specifically for Filament (Listing 1).

filament:install                      Install Filament.
make:filament-page                    Create a new Filament page class and view
make:filament-panel                   Create a new Filament panel
make:filament-relation-manager        Create a new Filament relation manager
make:filament-resource                Create a new Filament resource class
make:filament-user                    Create a new Filament user
...

Let's use one of these commands directly to install Filament together with our first panel:

php artisan filament:install --panels

After a few seconds, installation is complete. In order to log in to our panel, we need a user. Filament uses our Laravel project's user model by default. If we don't have any entries in our users table yet, we can easily create one with the following command:

php artisan make:filament-user

Et voilà! In less than a minute, we've created a complete administration panel with authentication, an initial test user, a dashboard page, a few widgets, and even an integrated dark mode. Once we've logged in using the automatically generated login form, we see a minimalist, well-designed dashboard that we want to fill with content. But how can that be? We haven't written a single line of code, edited any configuration files, defined routes, created views, or customised environment variables. Nevertheless, everything works out of the box. If we look at our project structure, apart from a few JavaScript components, only a single PHP file has been added to our project: app/Providers/Filament/UserPanelProvider.php. Let's take a closer look.


The PanelProvider – The heart of Filament

The PanelProvider is the core of every Filament dashboard. It tells Laravel that this panel is now active in our project. During installation, we were asked to name our first panel, which we called user (Listing 2).

public function panel(Panel $panel): Panel {
  return $panel
    ->default()
    ->id('user')
    ->path('user')
    ->login()
    ->colors(['primary' => Color::Amber])
    ->widgets([Widgets\AccountWidget::class])
    ->pages([Pages\Dashboard::class])
    ->authMiddleware([Authenticate::class])
    // ... 
}

A quick look at the clear and easy-to-read class quickly reveals why everything worked so smoothly. All the important settings needed for our panel's functionality are defined here using meaningful methods. These include the path where our panel can be accessed (/user) and the middleware used to authenticate our users. Here, Filament simply uses the existing Laravel authentication, which is exactly what we wanted. Of course, the default settings can be customised and extended according to our project requirements.

The default() method defines that this panel is our default panel. Since Filament version 3, you can run several panels in parallel in one project. A corresponding PanelProvider with its own authentication, pages, and components is created for each user type – e.g. an AdminPanelProvider for administrators and a UserPanelProvider for end customers. But for today, one panel is enough. So let's start filling it with life.

One resource to rule them all

There's currently only one user in the database – we created it earlier on the command line. Now we'd like to create more users who can use the dashboard. We'll need the corresponding pages, tables, forms, and routes to implement the CRUD operations (Create, Read, Update, Delete) for our user model. Our first Filament resource forms the basis.

A resource corresponds to an Eloquent model in our Laravel project. So, we create a corresponding UserResource for the user model. Of course, Filament provides a simple Artisan command for this too:

php artisan make:filament-resource User --view

And with that, we're (almost) finished. Besides the UserResource.php file, Filament's magic created a few more components:

  • a CreateUser page with a form for creating new users
  • an EditUser page with a form for editing users
  • a ViewUser page for viewing a user
  • a ListUsers page for displaying all users
  • all necessary routes and URLs
  • a link in the navigation to the ListUsers page

In other words, it's everything we need, and all with a single command line call. Let's take a moment to appreciate how many days or weeks of programming we've just been relieved of. The only thing we need to do now is define which fields of the form and which columns of the table should be displayed. A simple user resource might look like Listing 3.

class UserResource extends Resource {

  protected static ?string $model = User::class;

  public static function form(Form $form): Form {
    return $form->schema([
      TextInput::make('name'),
      TextInput::make('email'),
    ]);
  }

  public static function table(Table $table): Table {
    return $table
    ->columns([
      TextColumn::make('name'),
      TextColumn::make('email'),
      TextColumn::make('created_at'),
    ]);
  }

  public static function getPages(): array {
    return [
      'index' => Pages\ListUsers::route('/'),
      'create' => Pages\CreateUser::route('/create'),
      'edit' => Pages\EditUser::route('/{record}/edit'),
      'view' => Pages\ViewUser::route('/{record}/view'),
    ];
  }
}

This means that the CRUD implementation of our user model is already complete and we can move on to the next model. But we want to embellish it a little more and optimise our dashboard's user experience. With the UserResource as the basis, we can configure the forms and tables as we wish. We have many components at our disposal for this, which the Filament documentation covers in detail [7], including detailed tutorials and application examples.


Creating tables

The table object in our UserResource gives us control over our table's appearance and functionality. The most important components for displaying individual table fields are the columns. Depending on the table field, you choose the corresponding column type, e.g. TextColumn, IconColumn, or ImageColumn. The corresponding properties for the table field are defined using configuration methods (Listing 4).

public static function table(Table $table): Table {
  $table->columns([
    TextColumn::make('name')->searchable()->sortable(),
    TextColumn::make('email')->searchable()->badge()->color('secondary'),
    IconColumn::make('active')->boolean(),
    TextColumn::make('created_at')->dateTime('d.m.Y H:i')->sortable(),
  ])
  ->paginated([25, 50, 'all'])
  ->filters([TernaryFilter::make('active')]);
}

For example, the dateTime value created_at can be converted to a European format, the email field can be displayed as a badge, or the active flag can be displayed as a green tick or red X using IconColumn. It's especially noteworthy that with the searchable() method on the name and email fields, we can perform a full-text search on the user model without having to write complex queries. What takes many hours of work in other systems, Filament solves in just 14 characters of source code.

We can further restrict the search with filters defined with the filters() method. It’s just as easy to make individual columns sortable with the sortable() method or implement pagination for the entire table with paginated() (Fig. 3).

Fig. 3: UserResource table

There are practically no limits. Even if the existing columns aren't sufficient, you can create custom columns where you define your own logic, including your own view. The full range of functions and the performance of the table components can't be covered in full here; I refer you to the excellent documentation at https://filamentphp.com/docs.

Creating forms

The most important part of CRUD operations are the forms used to create and edit data. These can also be easily mapped in the Filament resource using the form() method and the form object (Listing 5).

public static function form(Form $form): Form {
  return $form->schema([
    TextInput::make('name')->label(__('user.label.name'))
      ->required()->string()
      ->minLength(3)->maxLength(64),
    TextInput::make('email')->label(__('user.label.email'))
      ->required()->email()
      ->maxLength(64),
    Toggle::make('active')->label(__('user.label.active'))
      ->default(true),
  ]);
}

The schema is adopted by default for the Create and Edit forms, although it can be overwritten individually on the CreateUser and EditUser pages – a sketch follows below the figure. After all, the same fields are not always required in both forms (Fig. 4).

Fig. 4: Edit Form of UserResource
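As a minimal sketch of such an override – assuming, as the Filament documentation describes, that a page class may define its own form() method – the Create page could use a narrower schema than the resource:

use Filament\Forms\Components\TextInput;
use Filament\Forms\Form;
use Filament\Resources\Pages\CreateRecord;

class CreateUser extends CreateRecord {

  protected static string $resource = UserResource::class;

  // Narrower schema only for the Create page; the resource's
  // form() remains in use on the Edit page.
  public function form(Form $form): Form {
    return $form->schema([
      TextInput::make('name')->required(),
      TextInput::make('email')->required()->email(),
    ]);
  }
}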

Depending on the type of the respective form field, you can now choose from a variety of input fields such as text input, drop-down, radio button, date-time picker, tags, rich text editor, and many more. Filament lacks nothing here and surprises with a clever and elegant implementation for every component.

Implementation allows you to configure each field according to your wishes. These configuration options are also explained in detail in the documentation.

 

The components provide a variety of helper methods for validating the forms. Laravel fans will be delighted, as these are modelled on Laravel's existing validation rules. So, in addition to simple rules like required and string, more complicated ones like required_without_all are also available, in the form of the requiredWithoutAll() method.

The Grid, Fieldset, and Tab classes are available to customise the form’s layout, allowing you to arrange the input fields in different ways. The Wizard class can be used to create a complete wizard with several steps without making the code more complex or confusing.
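A hedged sketch of such a wizard, splitting our user form into two steps (the step names are arbitrary):

use Filament\Forms\Components\TextInput;
use Filament\Forms\Components\Toggle;
use Filament\Forms\Components\Wizard;

Wizard::make([
  Wizard\Step::make('Account')->schema([
    TextInput::make('name')->required(),
    TextInput::make('email')->required()->email(),
  ]),
  Wizard\Step::make('Settings')->schema([
    Toggle::make('active')->default(true),
  ]),
])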

One particularly powerful tool is the FileUpload class, which lets us upload several files at once without much effort. When dealing with image files, we can use the imageEditor() method to activate a complete image processing tool, which lets us apply filters, crop, or scale each image directly after uploading (Listing 6). File and image processing has never been so easy!

FileUpload::make('images')
  ->label('Upload images')
  ->acceptedFileTypes(['image/jpg', 'image/jpeg'])
  ->preserveFilenames()
  ->disk('s3')
  ->multiple()
  ->image()
  ->imageEditor()
  ->imageResizeMode('cover')
  ->imageCropAspectRatio('1280:720')

It's definitely worth taking a look at the documentation here. You'll be surprised at what Filament's developers have thought of. And if you can't find something, there's a good chance other users have already written a plug-in, which can be found on the Filament website and (mostly) downloaded for free.

Creating customised pages

By creating the UserResource, we've already learned about different types of pages. By default, Filament provides pages such as ListRecords, CreateRecord, EditRecord, and ViewRecord to process our data. Although these are enough for implementing the CRUD functionalities, as creative developers we quickly reach the point where we want to integrate something of our own into our new dashboard. Filament lets us create customised pages for this:

php artisan make:filament-page Settings

With this command, we create an empty settings page that's automatically integrated into our navigation and gives our creativity free rein. As Laravel/Livewire developers, our work is made even easier. If you rummage through the source code and take a closer look at the BasePage class, which our new settings page inherits from, you'll notice that behind the variety of functionalities and the framework's magic, there is simply a normal Livewire component:

abstract class BasePage extends Livewire\Component { /* ... */ }

This is extremely good news, since we benefit from the advantages of both worlds. We have access to the many Filament tools like forms, tables, widgets, and actions. At the same time, we can utilise the full power and flexibility of Livewire: we define our blade.php view and work in the usual Livewire manner with the render() and mount() methods to implement our own logic (Listing 7).

class Settings extends Page {

  protected static string $view = 'filament.pages.settings';

  public function mount(): void {
    //
  }

  public function render(): View {
    //
  }
}

Although Filament already covers the majority of our requirements with its components, implementing custom pages offers maximum freedom and flexibility in designing our dashboard.

The ability to work as usual and not have to get used to a completely new architecture makes it very easy to get started with the framework and speeds up development immensely.


Creating actions

In the world of Filament, actions describe the ability to interact with the panel using a button or link – whether to perform operations on a page, edit a record, or simply call up a URL. Some of these actions were already added automatically when we created our resource (Listing 8).

public static function table(Table $table): Table {
  return $table
  // ... 
  ->headerActions([
    CreateAction::make()
  ])
  ->actions([
    EditAction::make(),
    DeleteAction::make()
  ]);
}

In addition to these ready-made actions, we also have the option of defining our own. For this, the Action class offers a variety of helper methods with which we can precisely define an action's appearance and functionality.

Let’s assume we want to create a green button for each user record in our table with which we can activate this user. But before the operation is executed, a modal should appear for confirmation. The button should only be displayed if the user is not yet activated.

If we wanted to develop this feature from scratch, it could become fairly complex, especially due to the delayed execution of the operation after confirmation in the modal. Even experienced full-stack developers would probably need a few hours to work out a sustainable architecture for this process and implement it. In Filament, the implementation looks like Listing 9: an elegant, minimalist, easy-to-read implementation of fairly complex logic. This is what makes software development fun.

Action::make('activateUser')
  ->button()
  ->color('success')
  ->icon('heroicon-o-check-circle')
  ->label(__('user.activate'))
  ->requiresConfirmation()
  ->modalDescription(__('user.activate.description'))
  ->hidden(function (User $record): bool {
    return (bool) $record->active;
  })
  ->action(function (User $record): bool {
    return $record->update(['active' => true]);
  }),

The RelationManager – Visualisation of Laravel relationships

With Eloquent relationships, Laravel offers an elegant way to work with linked tables. If a user has several posts, this is implemented with a HasMany relationship in the form of a posts() method in the user model – no complicated SQL queries or joins needed. Here too, Filament builds on this simple convention to elegantly display a resource's linked records, in the form of the RelationManager.
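For reference, that relation in the user model is plain Laravel – nothing Filament-specific is needed here:

use Illuminate\Database\Eloquent\Relations\HasMany;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable {

  // One user has many posts; the RelationManager below
  // builds directly on this method.
  public function posts(): HasMany {
    return $this->hasMany(Post::class);
  }
}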

RelationManagers are implemented in a resource and displayed as a table on the record's view and edit pages. An Artisan command also exists for creating a RelationManager:

php artisan make:filament-relation-manager UserResource posts title

This creates the PostsRelationManager.php file. As we've passed our UserResource, the posts relation to use, and the title attribute the table should display as parameters to the command, the RelationManager knows directly where it gets its data from and how to display it. This way, we can create any number of RelationManagers for all existing HasMany or ManyToMany model relations. For the documents relation of our user model, this would be a DocumentsRelationManager.php. Now we just have to tell our UserResource that it should also use these relations (Listing 10).

public static function getRelations(): array {
  return [
    PostsRelationManager::class,
    DocumentsRelationManager::class
  ];
}

Even if this is getting a bit boring: there's nothing more for us to do. For each of these RelationManagers, a tab is rendered in the view. Behind it is a table of the corresponding linked records, together with automatically generated action buttons for creating, editing, and deleting entries (Fig. 5). So we don't have to write a single line of code to process the user's posts. Filament's magic does this automatically, based on the underlying Laravel conventions.

Fig. 5: Displaying linked records with a RelationManager

The structure of a RelationManager class is very similar to a resource. Here we have the same options for configuring the class: add additional columns to the table, add new functions or buttons, adjust the title in the tab with heading(), or adjust the Eloquent query of the underlying relation with modifyQueryUsing() to filter the list of records in advance (Listing 11).

class PostsRelationManager extends RelationManager
{
  protected static string $relationship = 'posts';

  public function table(Table $table): Table {
    return $table
    ->heading(__('post.title.new-posts'))
    ->modifyQueryUsing(function (Builder $query) {
      return $query->where('status', 'new');
    })
    ->columns([
      TextColumn::make('title')->label(__('post.label.title')),
      TextColumn::make('content')->label(__('post.label.content')),
      TextColumn::make('status')->badge(),
    ])
    ->headerActions([
      CreateAction::make(),
    ])
    ->actions([
      EditAction::make(),
      DeleteAction::make(),
    ]);
  }
}

Testing Filament components

To ensure that our admin panel continues to function smoothly in the future, it's advisable to write feature tests for the individual components. Laravel and Livewire developers will feel right at home here, too. Since most Filament components are based on Livewire, they can be tested just as elegantly and easily: the existing, practical testing helpers from Livewire have been extended to test the various functionalities of Filament's forms, tables, and actions. This way, we can check whether our activateUser action has the desired effect or whether the sorting in our user table works. Incidentally, the test environment is based on PestPHP [8], a very good testing framework from the Laravel universe (Listing 12). But it's not a prerequisite for writing tests for Filament – PHPUnit can also be used as usual.

it('can activate users', function () {
  $user = User::factory()->create(['active' => false]);
  Livewire::test(ListUsers::class)
    ->callTableAction('activateUser', $user);
  expect($user->refresh()->active)->toBeTrue();
});

it('can sort users by name', function () {
  $users = User::factory()->count(10)->create();
  Livewire::test(ListUsers::class)
  ->assertTableColumnExists('name')
  ->sortTable('name', 'desc')
  ->assertCanSeeTableRecords($users->sortByDesc('name'), inOrder: true);
});

Great power comes with great responsibility

By now, it should be clear: Filament takes a lot of work off our hands thanks to its powerful components and outstanding developer experience. We hardly have to write any PHP code, define SQL queries, build complex architectures, or create Blade views manually. This undoubtedly speeds up development, but it doesn't only have advantages. Ultimately, we transfer considerable responsibility to the framework. We configure our project and rely on Filament to do the rest based on its conventions. Of course, this can lead to problems.


Here's a specific example. Suppose we want to add a column to our UserResource table. But instead of using a conventional database field like before, we use an Eloquent attribute from our user model. These attributes are generated dynamically and can contain complex logic depending on the use case – for example, by accessing a HasMany relation. On top of that, we'd like to render a select drop-down for each entry in the table, whose options are also generated dynamically from values in the database. If the table's pagination is set to SHOW ALL when the page is called up, this can end up triggering over 1,000 database queries (this example is based on real events). Although we hardly wrote any code, we managed to implement a process that places an unnecessarily high load on our production system under the wrong circumstances.
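A hypothetical sketch of that anti-pattern – the tickets relation and the accessor are invented for illustration:

// In the user model: an attribute computed with a fresh query
// every single time it is read.
public function getOpenTicketsCountAttribute(): int {
  return $this->tickets()->where('status', 'open')->count();
}

// In the table: the column is rendered once per row. With
// pagination set to "all", that is one extra query per user.
TextColumn::make('open_tickets_count'),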

The simple application of the framework often obscures how much complexity is actually behind some components. It's advisable to use tools like Laravel Debugbar [9] or Laravel Telescope [10] to monitor server requests, database queries, and the platform's general performance during development. This way, we can check our code's complexity before we deploy a new feature to the production environment.

Flexibility through complexity

In our experience with Filament, it's clear that we can cover the majority of use cases by consistently adhering to the framework's conventions. However, for exceptional situations where the standard tools aren't enough, Filament gives us the option to overwrite the existing logic and implement our own. For example, we can customise the query for saving a record, or set special values only under certain conditions. This is possible because a component's helper methods can accept a closure as a parameter. We already used this when creating our activateUser action (Listing 9). So instead of passing a string or Boolean as a parameter, we can define complex logic, wrap it in a closure, and pass that to the method. For instance, if we only want to make an EditAction visible to administrators, we can implement this as seen in Listing 13.

// Definition of method in Filament code
public function visible(bool | Closure $condition = true): static { 
  $this->isVisible = $condition;
  return $this;
}

EditAction::make()
->visible(function () {
  return auth()->user()->is_admin; 
});

As powerful as this approach is, it opens the door to errors, poor readability, and unclean code. Depending on the complexity of the passed closure, an elegant implementation can quickly become unreadable spaghetti code. It's all the more important to outsource more complex logic in order to keep the code base clear and clean.

Accelerated Laravel Development

The slogan Filament advertises on its website lives up to its promise. I've developed several projects with the framework over the past few years and am always impressed by how much work it takes off my hands thanks to its wide range of components and uncomplicated application. After implementing a ticket, I was often amazed that I only needed half the estimated time to realise a complex feature.

The excellent documentation, gentle learning curve, and rapid success in implementing new features make working with the framework really fun. Even new developers quickly get a feel for the software and can get started straight away without extensive familiarisation. Not even much experience in front-end topics such as HTML, CSS, or JavaScript is required: solid PHP and Laravel knowledge is completely sufficient and already enables you to realise a large part of your requirements for an admin panel. And for everything else, a quick look at the Livewire or Tailwind documentation is often enough.

When implementing more complex processes that aren't described in the documentation, things can sometimes get a little bumpy. It's not always immediately clear which method you need to overwrite to replace the default logic, or which parameters a closure requires for the component to function properly. At this point, you'll often find yourself rummaging through individual classes in the vendor/filament folder to familiarise yourself with the implementation and the magic behind the framework. Practice makes perfect. An overview of the advantages and disadvantages of Filament is summarised in Table 1.

Advantages:

  • Very good documentation
  • Simple learning curve and quick success
  • Good addition to the Laravel ecosystem
  • Very good developer experience
  • Wide variety of components
  • Hardly any front-end skills necessary
  • Many plugins and extensions
  • Lots of articles and tutorials
  • Large community

Disadvantages:

  • Freedom and flexibility invite you to write unclean code
  • Opaque "magic" can influence performance

Table 1: Advantages and disadvantages of Filament

Conclusion

In my many years as a web developer, I've worked with several admin panels. Although all of them somehow fulfilled their purpose, I don't remember any of them as a perfect solution. The limits of these panels often became apparent quickly, when complex requirements could only be implemented by working around the predefined logic or by hacking configuration files.

And even Laravel Nova [11], the official administration panel from Laravel, couldn’t fully convince me for several reasons.

It was only with Filament that I felt I'd found a panel that was suitable for a quick start and would also accompany the development of the entire platform in the long term. The framework provides a solid foundation that is not only ready for immediate use, but also flexible enough to meet growing requirements and future challenges.

To summarise, I clearly recommend Filament. Especially for Laravel developers, the framework is the perfect complement to Laravel’s elegant implementation and simple workflows. But even for PHP purists, a look at Filament is definitely worthwhile. It’s a prime example of how simple and efficient modern web development can be.

 


Links & Literature

[1] https://laravel.com

[2] https://filamentphp.com

[3] https://tallstack.dev

[4] https://tailwindcss.com

[5] https://alpinejs.dev

[6] https://livewire.laravel.com

[7] https://filamentphp.com/docs

[8] https://pestphp.com

[9] https://github.com/barryvdh/laravel-debugbar

[10] https://laravel.com/docs/10.x/telescope

[11] https://nova.laravel.com

Symfony 7 Released: Focuses on Streamlining and Future Features

https://phpconference.com/blog/symfony-php-framework-7-released-what-to-know/ – Tue, 30 Apr 2024

Symfony 7, the latest major release of the popular PHP framework, is here! This release prioritizes internal housekeeping and prepares your applications for upcoming features. While it doesn't introduce new functionality, Symfony 7 offers a smoother path to future advancements.


Major Symfony releases like version 7 typically address cleanup and deprecate outdated features. New features are introduced in minor releases, with Symfony 6.4 being the latest example. This release schedule ensures stability for applications on long-term support releases. Upgrading to Symfony 7 is recommended for those who want to leverage the features coming in Symfony 7.1, which is expected by the end of May.

Symfony 7 does offer some improvements under the hood. It removes the previously deprecated templating functionality for PHP-based templates, focusing solely on Twig support. Additionally, the release introduces a few new components to enhance development workflows.

  • WebHook and Remote Event: This component simplifies receiving notifications from external systems and managing how your application responds to them.
  • Scheduler: Schedule tasks to run periodically in the background without affecting your application’s performance (see the sketch after this list).
  • Asset Mapper: This component makes it easy to manage your application’s assets. It versions assets for efficient browser caching and utilizes the import map API, allowing modern browsers to import JavaScript files directly, eliminating the need for a bundler.
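As a taste of the Scheduler component, here is a minimal sketch based on its documented API – the GenerateDailyReport message class is hypothetical:

use Symfony\Component\Scheduler\Attribute\AsSchedule;
use Symfony\Component\Scheduler\RecurringMessage;
use Symfony\Component\Scheduler\Schedule;
use Symfony\Component\Scheduler\ScheduleProviderInterface;

#[AsSchedule('reports')]
final class ReportScheduleProvider implements ScheduleProviderInterface
{
    public function getSchedule(): Schedule
    {
        // Dispatch the message once a day via the Messenger component.
        return (new Schedule())->add(
            RecurringMessage::every('1 day', new GenerateDailyReport()),
        );
    }
}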


Migrating your application to Symfony 7 is relatively straightforward. By addressing any deprecated code usage flagged by your profiler or static analysis tools, you're well on your way. Additionally, ensure your dependencies are compatible with Symfony 7; upgrading them or contributing to their compatibility might be necessary. Finally, thorough testing after the upgrade is crucial.

In essence, Symfony 7 lays the groundwork for future innovations. Upgrading now ensures a smoother transition to the exciting features coming in Symfony 7.1 and beyond. Don’t hesitate to explore the new components and leverage Symfony’s open-source nature to contribute to its ongoing development!


Unlocking PHP 8.3

https://phpconference.com/blog/php-8-3-new-features-enhancements-guide/ – Mon, 05 Feb 2024

The final version of PHP 8.3 was released in November 2023. As with every year, there are a number of new features and bug fixes, as well as deprecations and breaking changes that need to be considered before updating to PHP 8.3.


The highlight of every new version is, of course, the new features. They often help us simplify our code and program more securely. Version 8.3 also includes a few adjustments that give PHP better error handling and enable us to keep an even closer eye on our code. This article provides an overview of the most important changes; a complete overview of all major and minor changes can be found in the official release notes [1].

Cloning of readonly classes

You want to clone an object, but instead you only get an error message from PHP. Anyone who uses the readonly properties from PHP 8.1 or the readonly classes from PHP 8.2 may already be familiar with this problem. This behavior has been adjusted in PHP 8.3: in the magic method __clone, readonly properties of an object can now be overwritten (Listing 1) [2].

class PHP {
  public string $version = '8.3';
}
 
readonly class Foo {
  public function __construct(
    public PHP $php
  ) {}
 
  public function __clone(): void {
    $this->php = clone $this->php;
  }
}
 
$instance = new Foo(new PHP());
$cloned = clone $instance;
 
$cloned->php->version = '8.3'; 


Type-safe constants in classes

Constants are a convenient tool for storing and retrieving fixed values. Once defined, they should provide a reliable source of consistently identical data – but that wasn't quite the case in PHP. A child class can overwrite the constant of a parent class. And not only that: since constants previously couldn't have a type, a string in a parent class could become an array in the child class, for example. This problem has been addressed in PHP 8.3, and you can now define class constants in a type-safe way (Listing 2) [3].

interface I {
  const string VERSION = '1.0.0';
}
 
class Foo implements I {
  const string VERSION = [];
}
 
// Fatal error: Cannot use array as value
// for class constant Foo::VERSION of type string 

Dynamic call of class constants

Let's stick with the topic of class constants: until now, these could only be called dynamically in a roundabout way, using the constant() function, as a direct dynamic call was not possible. Luckily, PHP 8.3 has been adapted accordingly. Constants can now be called dynamically with the same syntax we already know from the dynamic access of class properties. This change not only applies to constants, but has also been implemented for the enums introduced in PHP 8.1 (Listing 3) [4].

class Foo {
  const PHP = 'PHP 8.3';
}
 
$searchableConstant = 'PHP';
 
var_dump(Foo::{$searchableConstant}); 
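
Listing 3 only shows the class-constant case. Since the same syntax also works for enum cases, here is a minimal sketch of my own (the Suit enum is made up):

enum Suit: string {
  case Hearts = 'H';
  case Spades = 'S';
}
 
$case = 'Hearts';
 
var_dump(Suit::{$case}); // enum(Suit::Hearts)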


#[\Override] attribute

With the new #[\Override] attribute, methods in a child class can be marked to make the deliberate overriding of a parent-class method explicit. This allows errors in the method definition to be caught, because PHP 8.3 issues an error if no such method exists in the parent class. So instead of searching for the reason why the method you want to override is never called (a typo in its name, for example), PHP now provides an error message that clearly indicates the problem. Additionally, if you modify a parent class and inadvertently remove a method that has been overridden by a child class, you will now be notified with an error message (Listing 4) [5].

use PHPUnit\Framework\TestCase;
 
final class MyTest extends TestCase {
  protected $logFile;
 
  protected function setUp(): void {
    $this->logFile = fopen('/tmp/logfile', 'w');
  }
 
  #[\Override]
  // deliberate typo: "taerDown" instead of "tearDown" triggers the error below
  protected function taerDown(): void {
    fclose($this->logFile);
    unlink('/tmp/logfile');
  }
}
 
// Fatal error: MyTest::taerDown() has #[\Override] attribute,
// but no matching parent method exists 

json_validate() function

JSON is the method of choice in many interfaces when it comes to data exchange. So it’s quite surprising that, until now, you couldn’t avoid fully parsing a JSON string in PHP just to validate it and check whether an error occurred. That’s no longer the case with PHP 8.3, where the new json_validate() function checks whether a string is valid JSON before further use. If you are not interested in the content, but only in whether the JSON is valid, this function also works more efficiently than json_decode(), which was previously the only way to check [6], [7]:

var_dump(json_validate('{ "test": { "foo": "bar" } }')); // true
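
For the negative case, a small example of my own: a malformed string simply yields false, with no error handling required around json_decode():

var_dump(json_validate('{ "test": { "foo": "bar" }')); // false (missing closing brace)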

New Randomizer::getBytesFromString() method

With the random extension introduced in PHP 8.2, PHP has taken a real and important step towards cryptographically secure random methods. PHP 8.3 introduces the new method Randomizer::getBytesFromString(), which is passed a string of permitted characters from which the randomly generated string is drawn, along with the desired length (Listing 5) [8], [9].

// A \Random\Engine may be passed for seeding,
// the default is the secure engine.
$randomizer = new \Random\Randomizer();
 
$randomDomain = sprintf(
  "%s.example.com",
  $randomizer->getBytesFromString(
    'abcdefghijklmnopqrstuvwxyz0123456789',
    16,
  ),
);
 
echo $randomDomain; 
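
Another tiny example of my own: generating a twelve-character voucher code from an alphabet without easily confused characters.

$randomizer = new \Random\Randomizer();
 
$voucher = $randomizer->getBytesFromString('ACDEFGHJKLMNPQRSTUVWXYZ234679', 12);
 
echo $voucher; // e.g. "K7MPT2ACXQ4H"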

New Randomizer::getFloat() and Randomizer::nextFloat() methods

In addition to the getBytesFromString() method, the Randomizer class now has two more methods which return a random float. Randomizer::getFloat() returns a random float whose limits can be defined as desired using the parameters $min and $max; a third parameter can be used to specify whether or not the limit values should be included in the pool of expected random numbers. Randomizer::nextFloat(), on the other hand, returns a float between 0 and 1 and is therefore equivalent to Randomizer::getFloat(0,1, \Random\IntervalBoundary::ClosedOpen) (Listing 6) [10], [11].

$randomizer = new \Random\Randomizer();
 
$temperature = $randomizer->getFloat(
  -89.2,
  56.7,
  \Random\IntervalBoundary::ClosedClosed,
);
 
$chanceForTrue = 0.1;
// Randomizer::nextFloat() is equivalent to
// Randomizer::getFloat(0, 1, \Random\IntervalBoundary::ClosedOpen).
// The upper bound, i.e. 1, will not be returned.
$myBoolean = $randomizer->nextFloat() < $chanceForTrue; 

PHP linter with support for multiple files

A practical command on the command line is php -l. This can be used to check any PHP file for syntax errors. With PHP 8.3, it is now possible to validate not just one, but any number of files at once. Not much has changed in terms of the output; for each additional file, an additional line is output to indicate whether the file contains syntax errors or not [12]:

php -l foo.php bar.php
No syntax errors detected in foo.php
No syntax errors detected in bar.php
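
If one of the files does contain a syntax error, the exit code is non-zero and the output looks roughly like this (the exact wording of the parse error depends on the PHP version):

php -l foo.php broken.php
No syntax errors detected in foo.php
PHP Parse error:  syntax error, unexpected end of file in broken.php on line 3
Errors parsing broken.php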

New classes, interfaces and functions

Of course, these are not all the changes that PHP 8.3 has to offer, but they are definitely the most important ones in the daily life of a PHP developer [13]. The DOM classes DOMElement, DOMNode, DOMNameSpaceNode and DOMParentNode have received new helper methods to simplify navigation in the DOM of HTML and XML documents. IntlCalendar has received new helpers to set date and time, and IntlGregorianCalendar has received two new methods to create a calendar based on a date and time. With mb_str_pad there is a function that works analogously to str_pad but supports multibyte strings. To increment and decrement an alphanumeric string, you can use the str_increment and str_decrement functions from PHP 8.3 onwards.
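
A quick illustration of the new string helpers (my own example; note that mb_str_pad counts characters, not bytes):

var_dump(str_increment('Az')); // string(2) "Ba"
var_dump(str_decrement('Ba')); // string(2) "Az"
 
var_dump(mb_str_pad('äöü', 6, '_')); // string(9) "äöü___"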

Deprecations and breaking changes

In the latest version of PHP, there are again deprecations that will be removed in later versions, but there are also a few changes that alter the behavior of existing code [14]. Invalid data passed to PHP’s Date/Time extension previously led to warnings or to errors in the form of a plain \Exception or \Error. These were not always easy to handle, as no specific exception types were thrown. This changes with PHP 8.3: there is now a general DateException for all errors caused by dates that fail to parse, and it is extended by more specific child exceptions such as DateInvalidTimeZoneException. Furthermore, when appending to an array whose only key so far is a negative index n, PHP 8.3 ensures that the next key is not 0 but n + 1.
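
A small made-up example of the array change:

$array = [];
$array[-5] = 'a';
$array[] = 'b';
 
var_dump(array_keys($array));
// PHP 8.2: [-5, 0]
// PHP 8.3: [-5, -4]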

Outlook for PHP 8.4

Of course, the PHP developers aren’t just standing around after version 8.3’s release. The first changes for PHP 8.4 have already been announced [15]. For example, the parsing of HTML5 documents with the DOM extension is to be simplified, and there are a few changes to the just-in-time compiler. BCrypt, the hashing algorithm PHP uses to hash passwords, is to become more expensive to compute, which makes the resulting hashes more difficult to crack. And with mb_trim, trim is finally getting a sibling function that can work with multi-byte strings.
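
If mb_trim lands as currently proposed, it would also strip multi-byte whitespace; a hypothetical example (U+3000 is the ideographic space):

var_dump(mb_trim('　PHP　')); // string(3) "PHP"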


Links & Literature

[1] https://www.php.net/releases/8.3/en.php

[2] https://wiki.php.net/rfc/readonly_amendments

[3] https://wiki.php.net/rfc/typed_class_constants

[4] https://wiki.php.net/rfc/dynamic_class_constant_fetch

[5] https://wiki.php.net/rfc/marking_overriden_methods

[6] https://wiki.php.net/rfc/json_validate

[7] https://www.php.net/manual/en/function.json-validate.php

[8] https://wiki.php.net/rfc/randomizer_additions#getbytesfromstring

[9] https://www.php.net/manual/en/random-randomizer.getbytesfromstring.php

[10] https://wiki.php.net/rfc/randomizer_additions#getfloat

[11] https://www.php.net/manual/en/random-randomizer.getfloat.php

[12] https://www.php.net/manual/en/features.commandline.options.php

[13] https://www.php.net/releases/8.3/en.php#other_new_things

[14] https://www.php.net/releases/8.3/en.php#deprecations_and_bc_breaks

[15] https://wiki.php.net/rfc#php_84

The post Unlocking PHP 8.3 appeared first on International PHP Conference.

]]>
Serde for PHP 8: How Functional Purity Drives Serde’s Architecture https://phpconference.com/blog/interview-larry-garfield-serde-php-8-library/ Thu, 11 Jan 2024 08:27:45 +0000 https://phpconference.com/?p=85919 Delve into the world of Serde and Crell with Larry Garfield, the PHP expert who created this unique and versatile library. Larry currently works as a staff engineer at LegalZoom but has worked at Platform.sh, written books on PHP, and contributed to the Drupal 8 Web Services initiative to create the modern PHP we're familiar with today. We caught up with Larry to talk about Serde and its supporting libraries. Read on and learn everything you need to know about Serde and Crell.

The post Serde for PHP 8: How Functional Purity Drives Serde’s Architecture appeared first on International PHP Conference.

]]>

IPC-Team: Thank you for taking the time to speak with us today, Larry. Can you introduce yourself for our readers?

Larry Garfield: Hi, I’m Larry Garfield, 20+ year PHP veteran. I’ve worked on a couple of different Free Software projects, and currently work as a Staff Engineer for LegalZoom. I am also a leading member of the PHP Framework Interoperability Group (PHP-FIG).

IPC-Team: Congratulations on the recent release of Serde 1.0.0. How did Serde come about and what was your motivation?

Larry Garfield: Serde came out of a need I had while working for TYPO3, the German Free Software CMS. I wanted a tool to help transition TYPO3 from giant global array blobs for all configuration toward well-typed, explicitly defined objects. Translating arrays into objects is basically a serialization problem, so rather than do something one-off I figured it was a good task for serialization.

I first looked at Symfony Serializer, as TYPO3 already uses a number of Symfony components and it was generally regarded as the most robust option. Unfortunately, after spending multiple weeks trying to coax it into doing what I needed, I determined that it just couldn’t. It didn’t have the structure-manipulation features I needed, and its architecture was simply too convoluted to make adding them feasible. That meant I had to build my own.

After some initial experimentation of my own, I looked into Rust’s Serde crate, as it’s generally regarded as the best serializer on the market. Rust, of course, is not the same as PHP, but I was still able to draw a lot of ideas from it. For instance, Crell/Serde is streaming, like Rust’s Serde. It doesn’t have a unified in-memory intermediate representation (necessarily), but there is a fixed set of “low level” types that exporters and importers can map to. (That list is smaller for PHP than for Rust, naturally.)

It took a few months, but I was able to get Crell/Serde to do nearly everything I needed it to for TYPO3. Most especially, I’m very happy with the data restructuring capabilities it has. That is, the serialized form of an object doesn’t have to be precisely the same as the in-memory object. There are robust rules for automatically changing that structure in controlled ways when serializing and deserializing. For instance, a set of 10 JSON properties can be grouped up into three different object properties, with changed names in PHP to avoid prefixes and such, automatically. That was important for TYPO3, because the old array structures had a lot of legacy debt in their design, and this was an opportunity to clean that up.

Along the way, Serde also spawned two supporting libraries: Crell/fp and Crell/AttributeUtils. The latter is where a lot of Serde’s power lives, in fact, in its ability to power nearly everything through PHP attributes. That functionality is now available to any library to use, not just Serde.

In the end, TYPO3 chose not to pursue the array-to-object transition after all. But since it’s an Open Source organization, the code was already written and could be released. After I left TYPO3, I polished the library up a bit further, added a few more features, and released it.

IPC-Team: Serde shares its name with the Serde framework used with Rust, was that an inspiration? Have the two ever been confused?

Larry Garfield: As noted above, yes, it’s definitely named in honor of Rust Serde and drew from its design. So far the name similarity hasn’t been an issue. I did have someone on Mastodon complain that my name choice was going to hurt SEO, but so far that doesn’t seem to have been an issue. It’s too late to change it anyway. 🙂

“Despite all of its power and flexibility, Serde is pretty fast. The last time I benchmarked it, it was faster than Symfony Serializer on the same tasks, despite having more features and options.”

IPC-Team: What formats does Serde support?

Larry Garfield: As of 1.0.0, Crell/Serde can round-trip (serialize and deserialize) PHP arrays, JSON, YAML, and CSV. It can also serialize to JSON and CSV in a streaming-fashion. I have a working branch on XML support, but that’s considerably more challenging and I suspect XML may be better handled in a different approach than a general purpose serializer.

I would like to add support for TOML, and there has been interest in it, but so far we’ve not found any existing TOML 1.0 parsers for PHP, only for the old 0.4 format. If someone made a good 1.0-compatible library, plugging that into Serde should be pretty simple.

Serde is entirely modular, so new formats can be added by third parties easily. That said, I’m happy to colocate supportable formats in Serde itself. Ease of setup and use is a key goal for the project.

IPC-Team: What sets Serde apart from the competition?

Larry Garfield: I think Serde excels in a number of areas.

  • a. As mentioned, the data-restructuring capabilities are beyond anything else in PHP right now, as far as I’m aware. I think it may even be more flexible than Rust Serde in some ways.
  • b. Support for streaming JSON and CSV output. Combined with the ability to read from generators in PHP, that means Serde has effectively no maximum on the size of data it can serialize.
  • c. It’s “batteries included.” Symfony Serializer is a bit tricky to set up if you’re using it outside of the Symfony framework itself. There’s lots of exposed moving parts. Serde can be used by just instantiating one class and using it (see the sketch after this list). It can be configured in more robust ways, but for just getting started it’s trivially easy to use.
  • d. Despite all of its power and flexibility, Serde is pretty fast. The last time I benchmarked it, it was faster than Symfony Serializer on the same tasks, despite having more features and options. If you don’t need Serde’s capabilities, then a purpose-built lightweight hydrator would still be faster, but in most cases Serde will be fast enough to just use and move on. It also has natural places to hook in and provide custom serialization for certain objects, which can be purpose-built and faster than the general pipeline.
  • e. “Scopes” support. Symfony Serializer also supports multiple ways of serializing an object through serialization groups, which are very similar. The way Serde ties in attributes, however, gives it even more flexibility, and I am not aware of any other serializer besides Serde and Symfony that has that ability.
  • f. This is more of a personal victory, but Serde is about 99% functionally pure. It follows the functional principles of immutable variables, functionally pure methods, statelessness, etc., even though it’s overall object-oriented. Really holding to that line helped drive the architecture in a very good place, and is one of the reasons Serde is so extensible.
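
To illustrate the “batteries included” point, here is a minimal usage sketch based on the project README (Widget stands in for any well-typed class of your own):

use Crell\Serde\SerdeCommon;
 
$serde = new SerdeCommon();
 
// Serialize a typed object to JSON...
$json = $serde->serialize($widget, format: 'json');
 
// ...and turn the JSON back into a well-typed object.
$widget = $serde->deserialize($json, from: 'json', to: Widget::class);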

IPC-Team: What are some of the advantages of using Serde for serialization and deserialization in PHP applications?

Larry Garfield: I see Crell/Serde as a good fit any time unstructured data is coming into an application. It is always better to be working with well-structured, well-typed objects than array blobs. Because Serde is so robust and fast, it’s straightforward to “guard” everywhere data is coming into the application (from an HTTP request, config file, database, REST response from another service, etc.) with a deserialization layer that ensures you have well-structured, typed, autocompletable data to work with. That can drastically cut down on the amount of error handling needed elsewhere in the application.


The same is true when sending requests. Rather than manually build up an array to pass to some API call (as many API bridges expect you to do), you can build up a well-structured object, using all of the good OOP techniques you already know (typed properties, methods, etc.), and then dump that to JSON or a PHP array at the last second before sending it over the wire. That ensures you have the right structured data every time; your PHP code wouldn’t even run otherwise.

IPC-Team: What are some of Serde’s limitations?

Larry Garfield: As mentioned, XML and TOML support are still pending. I’ve had someone ask about binary formats like protobuf, and I think that could probably be done, but I’ve not tried.

There are some edge cases some users have reported around the data restructuring logic when using “boxed” value objects. For instance, a “Name” class that contains just a string and an “Email” class that contains just a string, both of which are then properties on an object to serialize. That’s only partially supported right now, although I’m working on ways to improve it. Hopefully it will be resolved by the time you read this.

Crell/Serde also supports only objects. It cannot serialize directly from or to arrays or primitives. In practice I don’t think that’s a big issue, as “turning unstructured data into structured objects” is the entire reason it exists.

IPC-Team: How do you recommend getting started with Serde and how can someone get involved with the community?

Larry Garfield: I’m quite proud of the Serde documentation in the project README. It’s long, but very complete and detailed and gradually works up to more and more features. The best way to get started is to read the first section or two, then try playing with it. It’s deliberately easy to just toy around with, and add-in capabilities as you find a use for them.

As far as getting involved in the project itself, as in any Free Software project, file good bug reports, file good feature requests. If you want to try and add a feature, please open an issue first to discuss it. I don’t want someone wasting time on a feature or design that won’t work.

In particular, if someone wants to try writing a formatter for writing to protobuf or other binary formats, I’d love to see what can be done there. I’ve not worked with that format myself so that’s a good place to dig in.

At the moment, Crell/Serde is entirely volunteer-developed by me, since it’s no longer sponsored by TYPO3. Please keep that in mind any time you’re working with this or any Free Software project. Of course, if you are interested, I’m happy to accept sponsorship for prioritizing certain requests.

IPC-Team: What’s on your wishlist for future iterations or updates of Serde?

Larry Garfield: Mainly addressing the limitations mentioned above. TOML support would be good to include. XML may or may not make sense. I like the idea of supporting boxed value objects better. Binary formats would be another good differentiating feature.

One feature in particular I’m exploring is allowing attributes to be read from a non-attribute source. AttributeUtils, which handles all attribute parsing, is also clean enough that plugging in an alternate backend should be easy. If that alternate backend reads data from a YAML file, for instance, using Serde, and deserializes into whatever attribute set a given library is using (such as Serde’s own attributes), that would allow any AttributeUtils-using library to easily support YAML or JSON configuration in addition to in-code attributes, but still resulting in the same metadata objects for a library to use. I’m still working on how to make this work, but I’m pretty sure it is feasible. Stay tuned.

The post Serde for PHP 8: How Functional Purity Drives Serde’s Architecture appeared first on International PHP Conference.

]]>
17 Years in the Life of ElePHPant https://phpconference.com/blog/keynote-17-years-in-the-life-of-elephpant/ Tue, 14 Nov 2023 08:51:52 +0000 https://phpconference.com/?p=85810 In the vast and dynamic world of programming languages, PHP stands out not only for its versatility but also for its unique and beloved mascot – the elePHPant. For 17 years, this charming blue plush toy has been an iconic symbol of the PHP community, capturing the hearts of developers worldwide.

The post 17 Years in the Life of ElePHPant appeared first on International PHP Conference.

]]>

The story of the elePHPant began in Canada, where Damien Seguy, the founder and father of the elePHPant, first brought this adorable creature to life. Little did he know that this creation would become a global ambassador for the PHP language, spreading joy and camaraderie among developers on every continent, including the frosty expanses of Antarctica.

At this year’s International PHP Conference (IPC), Damien Seguy took center stage to share the remarkable journey of the elePHPant. The keynote presentation was a nostalgic trip through the past 17 years, highlighting the elePHPant’s adventures, milestones, and enduring impact on the PHP community.

The elePHPant’s global travels are a testament to the interconnectedness of the PHP community. From North America to Europe, Asia, Africa, Australia, and even the remote corners of Antarctica, the elePHPant has become a cherished companion for PHP developers everywhere. It has been a source of inspiration, a conversation starter at conferences, and a symbol of the shared passion that unites developers across borders.

Beyond its physical presence, the elePHPant has also made its mark in the digital realm. It is a common sight on social media, where developers proudly share photos of their elePHPant companions during meetups, conferences, and coding sessions. The elePHPant’s virtual presence reflects the close-knit and supportive nature of the PHP community.

The IPC keynote offered a glimpse into the evolution of the elePHPant, showcasing the various editions and designs created over the years. Each elePHPant is a unique piece of PHP history, and collectors worldwide treasure them as valuable artifacts.

As the PHP language continues to evolve, so does the legacy of the elePHPant. It remains a symbol of the vibrant and passionate PHP community, which values collaboration, knowledge-sharing, and the joy of coding. The elePHPant’s 17-year journey is a testament to the enduring spirit of PHP developers worldwide. As it continues to travel the globe, it carries the memories and experiences of every coder who has crossed paths with this beloved mascot.

The post 17 Years in the Life of ElePHPant appeared first on International PHP Conference.

]]>
Professional Test Management with TestRail – Part 2 https://phpconference.com/blog/professional-test-management-with-testrail-part2/ Fri, 06 Oct 2023 12:28:46 +0000 https://phpconference.com/?p=85673 The process in a testing team already starts in the leading project phase with an intensive planning of test concepts, optionally directly for the different levels of the V-Modell (component test, integration test, system test, acceptance test).

The post Professional Test Management with TestRail – Part 2 appeared first on International PHP Conference.

]]>

Testing is more than just running the tests! We have already explained this statement in detail in Part 1 of our series.

The simplified flow from “Test Case Management” to “Test Planning”, “Test Execution” up to the “Final Reports” shows the wide spectrum of activities in the QA environment.

So that these things can be carried out in a controllable manner, there are tools such as “TestRail”.

The test management software allows you to create a clean and filterable test catalog with detailed instructions for the execution of tests.

Together we have created such a test catalog in part 1, which we now use accordingly for further planning.

Now that we have developed our tests and entered them as optimally as possible in TestRail, it is time to prepare them for execution by means of “Test Runs”.

Creating Test Plans

There are several options available in TestRail for planning. The most basic one is a simple test run (“Test Run”), which we can create under the menu item “Test Runs & Results”. This test run will later contain various tests from the catalog, selected either manually or automatically via filtering.

However, if we want a more structured approach, TestRail also offers the possibility to create a test plan. A Test Plan can contain any number of Test Runs, allowing for thematic structuring or subdivision.

Test Plans and Test Runs can be combined in different ways. In the simplest case, matching the definition above, a Test Run is a single run with a final result. A Test Plan could then contain several such runs until finally everything is OK and the feature can be accepted.

Another variant is a test plan containing test runs that cover diverse topics. For example, one test run might contain all the automated Cypress tests, another the smoke and sanity tests, and another the regression tests or new features. This is often helpful as a visual representation, but can also be used to assign runs to different testers on the team. In this case, the test runs would remain open or be repeated until everything is ultimately OK.

Before we actually create a test plan, we should look at the “Milestones” section in the main menu item. All test plans or test runs can also be assigned to milestones. These thus provide a rough subdivision, which can be done at your own discretion or in coordination with the project management.

Now we create our test plan and three test runs each for “Cypress Tests”, “Smoke and Sanity” and “Regression Tests”.

When creating a single test run, we have several options for selecting the tests. We can choose to add all tests, only certain manually selected tests, or use dynamic filtering to make the selection.

In case of manual selection, a window opens with an overview of our test catalog. Here we can navigate through our structured sections and select desired tests by simply ticking them. After clicking “OK”, the selected tests are applied to the test run.

When using dynamic filtering, we also see a modal. On the right side we have the possibility to specify different filter settings. Depending on how extensive the list is, we need to make sure to click on the “Set Selection” button at the bottom (scrolling may be required). Only then will TestRail highlight the appropriate tests based on our filtering. The rest of the process is the same as for manual selection.

If you now think that this is all TestRail offers us, you are considerably mistaken. TestRail offers us many more useful functions in the editing view of a test plan. The “Configurations” button opens a small window where we can create various groups and configurations. Based on the selected combinations, our prepared test cases will be duplicated and created for each specified configuration. For example, we could create groups for browsers, operating systems and devices. The configurations could then be “Chrome”, “Firefox”, or “Windows 11”, “MAC”, etc. We can then select which combinations we want to test. After we confirm this, we have different test runs for all our combinations, which we can customize or even remove. Of course, it is also possible to assign each Test Run to a different tester in the system.

So with all these features, we have flexible options to find our own customized approach for a project and a way of working.

At the end of the day, it is crucial to have a clear overview of the tests and be able to quickly provide feedback on the current status.

Test Execution

Now we finally get to the execution of our tests. Depending on the strategy and approach, this can be done either during the project, or classically at the end. Combinations are also possible if there are sufficient resources.

To start a run, we simply go to the detail page of the desired test run. On this page we have an efficient overview with statistics, sections and filtering options. A simple master-detail navigation allows to see the list of tests on the left side, and the details of the currently selected test on the right side.

For each test, multiple results can be recorded here. To do this, we simply click on the drop-down menu of the status (e.g. “untested”) in the list or on “Add result” within the details page. We can pre-select anything without consequence, such as “passed”, as a separate window will open anyway where we can adjust the results again. This may seem unexpected at first, but it is easy to learn. Basically, it is up to us which view we want to use to test. The most important thing is to read the steps carefully. However, the modal offers the advantage of marking steps already performed as “passed” to keep track of them, and it also allows us to record times, which can be interesting for planning future test runs.

Once we have captured the result of the test, TestRail does an excellent job of logging. The modal contains not only a comment function, but also fields for build number, version, etc., in addition to the status (Passed, Blocked, Retry, Failed). These can be expanded with additional fields as needed. A particularly interesting area concerns defects. Here we not only have the option to enter reference numbers (i.e. ticket IDs), but you can also create tickets directly in Jira, as long as Jira is connected to TestRail. So if we find a bug in the software, we can create a Jira ticket directly from TestRail, and the ticket ID is automatically linked to the test result in TestRail. This allows QA teams to track the current status of Jira tickets directly in TestRail and see when a feature can be retested, independent of project management and developers. Within Jira, all relevant information from TestRail is displayed in the ticket, and the template used can be edited in TestRail. In this way, developers are also provided with all the necessary information.

Traceability and Reports

TestRail provides a comprehensive range of reporting options to monitor progress and test coverage. You can compare results from different test runs, configurations and milestones. These reports can be automatically generated with a schedule and shared with both internal team members and external stakeholders, including the ability to generate reports as PDFs.

Learning TestRail’s reporting features may take some time, but once the various options are understood, the reports can be customized extensively to meet the team’s unique needs.

In addition to generated reports, TestRail also offers real-time reports. These can be found at the project level, milestone level, test plan level and test run level.

In the area of tracking, TestRail provides the ability to assign external reference IDs, for example a Jira ticket ID. If Jira is linked correctly, hovering over such a reference even opens a tooltip with information pulled directly from Jira. This gives you the possibility to assign different tests to a Jira ticket (e.g. an Epic). This linking can be used for corresponding evaluations, but also for simple filtering when creating test plans.

TestRail API

TestRail has an extremely comprehensive HTTP-based API, which enables the creation of a wide range of interfaces. Using this API, we can retrieve test cases, create new test results, send attachments, and perform basic tasks such as creating test runs and editing configurations.

TestRail provides its own GitHub repository with templates for development in PHP, Java, .NET, Ruby and more.

Based on this API, we can now integrate a plugin for our test automation and submit results directly from Cypress to TestRail.

Cypress and TestRail

There are various reasons why test automation is sought: resource constraints, avoiding repetitive manual steps, or securing critical, error-prone areas of the application.

To begin automation with Cypress, let’s create a Cypress project. Since the focus of this article is on TestRail, we will not go further into the implementation of Cypress tests here. The crucial point is the actual integration of our plugin.

First, we select a test from our test catalog. In collaboration with QA and development team (or Test Automation Engineers), a kick-off is conducted to take a closer look at the desired test and its behavior. After the test is implemented in Cypress, it is reviewed accordingly. If everything fits, we can mark the test as “automated” in TestRail. This will give us a better overview in the future of which tests are automated, and therefore no longer need to be tested manually.

But how do the results from Cypress get into TestRail? Quite simply – via an appropriate plugin based on the TestRail API. We install a compatible plugin like the “Cypress TestRail Integration” [https://github.com/boxblinkracer/cypress-testrail].

The configuration is relatively simple using the “setupNodeEvents” function enabled by Cypress.

e2e: {
  setupNodeEvents(on, config) {
    return require('./cypress/plugins/index.js')(on, config)
  },
}

This configuration delegates to our manually created “index.js” file, which contains the actual registration of the plugin. Of course, this step can also be done inline.

const TestRailReporter = require('cypress-testrail');
 
module.exports = (on, config) => {
  new TestRailReporter(on, config).register();
  return config;
};

After this is done, there are only two simple steps left. First, we still need a configuration for our TestRail instance, and of course we still need to link our created test to the test that is in TestRail.

Let’s start with the configuration. We have several options to do this. Either we create a “cypress.env.json” file or work directly with environment variables, for example in the CI/CD section.


The plugin offers two basic ways to send results to TestRail: either directly to an existing, prepared test run, or by having new runs created dynamically. Which approach is appropriate can vary from team to team and project to project, so this flexibility is welcome.

The following example shows a JSON file that sends results to a defined Test Run:

{ "testrail": { "domain": "my-company.testrail.io", "username": "myUser", "password": "myPwd", "runId": "R123" } }

After the connection is configured, we just need to map our Cypress test to the appropriate TestRail test. This is done via a simple mapping in the test description, using the ID from TestRail. The TestRail ID is shown next to each test and always starts with a “C”. It is also possible to link multiple Test Cases to a single Cypress test.

it('C123: My Test for TestRail case 123', () => {
  // ...
});
 
it('C123 C54 C36: My Test for multiple TestRail case IDs', () => {
  // ...
});

That’s all. Now when we start Cypress in “run” mode, we see a hint about our integration and its configuration at the beginning. After a spec file is processed in Cypress, the results of the tests performed in it are finally sent to TestRail.

The integration offers many more options, such as uploading screenshots, adding more metadata and much more.

Conclusion

Testing is more than just running tests. To get the multitude of necessary tasks sorted out, test management tools like “TestRail” help us. TestRail offers a powerful test management solution that covers the entire quality management process, from test case creation to reporting. With features for structuring test catalogs, flexible test plans and comprehensive reporting, it enables efficient test management.

TestRail’s seamless integration with other tools, such as Jira, facilitates collaboration between test and development teams. In addition, the comprehensive API enables integration with automation software such as Cypress, among others.

Overall, TestRail provides a comprehensive solution to streamline the QA process and deliver high-quality software products.


Links & Literature

https://www.testrail.com/

https://github.com/gurock/testrail-api

https://github.com/boxblinkracer/cypress-testrail

The post Professional Test Management with TestRail – Part 2 appeared first on International PHP Conference.

]]>
Professional Test Management with TestRail – Part 1 https://phpconference.com/blog/professional-test-management-with-testrail-part1/ Tue, 26 Sep 2023 12:55:42 +0000 https://phpconference.com/?p=85653 "Now just a quick test and we can go live!" Surely most of us have heard this statement before. A professional approach, perfect plans and structured work during the project - and yet this optimistic, yet at the same time naive conclusion in the home stretch.

The post Professional Test Management with TestRail – Part 1 appeared first on International PHP Conference.

]]>

But where does the problem with testing lie? Not in testing itself, but in the perception that testing can be done quickly and at short notice. However, professional quality management encompasses much more than just testing. It starts at the very beginning of the project, and over its duration provides answers to questions such as the coverage of planned tests, the progress of the project, the number of known defects, and much more.

Tools are available to us for exactly these tasks, so-called test management applications. In this article, we will take a look at the application “TestRail”, and learn what possibilities such software offers us, and how we can use it.

However, before we get into the details, it is important to consider what is actually meant by the term “testing” and what tasks are associated with it.

What does professional testing mean?

What does testing actually mean? According to the guidelines of the ISTQB (International Software Testing Qualifications Board), testing includes:

The process consisting of all lifecycle activities (both static and dynamic) that deal with planning, preparation, and evaluation of a software product and associated deliverables.

This definition is undoubtedly based on a broad focus on all activities, which means that testing encompasses much more than simply running tests.

If we take a closer look at the start of a new project, it is common knowledge that project management, technical lead developers and other stakeholders work with the customer to create project plans, divide them into work packages and release them for development. What is often neglected, however, is the role of testers in this crucial planning phase of the project.

In professional quality management or quality assurance, one or more test concepts are developed at the beginning of the project. These test concepts sometimes deal with seemingly simple questions which, however, play a central role in the development of test cases.

What are the goals of our testing? Do we want to build trust in the software, or just minimize risks? Evaluate conformance, or simply prove the impact of defects? What documents do we create for our tests? What forms the basis of our tests (concepts, specifications, instructions, functions of the predecessor software)? Which test environments are available, when will they be implemented, and which approaches and methods do we use to develop test cases?

For those who have now had their “aha” moment, it should be added that such test concepts can indeed be elaborated for each test level of the V-Modell. For example, in the area of component testing, we usually strive for things like unit tests, code coverage and whitebox testing, while in system testing, blackbox testing methods are increasingly used for test case development (equivalence classes, decision tables, etc.). In addition, system testing may already be validating instead of just verifying things.
Validation deals with making sense of the result (does the feature really solve the problem?), while verification refers to checking requirements (does it work according to the requirement?).

Due to the considerable amount of information and the work steps according to ISTQB (yes, that was by far not all), I would like to divide these into four simple areas:

  • Test Case Management
  • Test Planning
  • Test Execution
  • Final Reports

This clear structure makes it possible to manage the complexity of the testing process and to ensure that all necessary steps are carried out carefully.

Testing in a Software Project

To facilitate the later use of TestRail, let’s now take a rough look at the flow of a project, using the points simplified above.

After the test base (requirements, concepts, screenshots, etc.) has been defined, various test concepts have been generated, and appropriate kick-off meetings have taken place, it is the responsibility of the testers to develop appropriate test cases. These tests essentially provide step-by-step guidance on how to perform them, whether on a purely written or even visual basis.

Those who have done this before know that there are few templates and limitations in this regard. These range from simple functional tests, such as technical API queries, to extensive end-to-end scenarios, such as a complete checkout process in an e-commerce system, including payment (in test mode).

A key factor in test design is recognizing that quantity does not necessarily mean quality. It makes little sense to have 1000 tests that cannot possibly be run manually over and over again due to scarce capacity. It makes much more sense to create fewer tests, but with a large number of implicit tests so that they automatically test additional peripheral aspects of the actual case, if possible.

Now that a list of tests has been created, it is of course useful if it can be filtered. Therefore, the carefully compiled test catalog is additionally categorized. The so-called “Smoke & Sanity” tests comprise a small number of tests that are so critical that they should be tested with every release. Simple regression tests, in turn, provide an overview of optionally testable scenarios that can be rerun as needed (suspected side effects, etc.).

The list of these categories can vary, as there is no official standard and they can vary from company to company. Ultimately, the most important thing is the ability to easily filter based on requirements. Of course, there are many other interesting filtering options, such as a reference Jira ticket ID for the Epic covered in the test, or possibly specific areas of the software such as “Account”, the “Checkout” or the “Listing” in e-commerce projects.

Now that the test catalog has been generated, the question is whether we should directly test it in full. The answer is yes and no! Here it depends on what is crucial for the project management and the stakeholders, i.e. what kind of report they ultimately need.

Therefore, we can create test plans that include either all tests, or only a subset of them. Usually, for example, before a release for a plugin (typically with semantic versioning v1.x.y, …) all “smoke & sanity” tests are tested, as well as some selected tests for new features and old features. Although it would of course be ideal to run all tests, this is unfortunately often unrealistic, depending on team size and time pressure. A relaunch project that is created from scratch should of course be fully tested before final acceptance. However, for a more economical way of working (shift-left), it is possible to plan various test plans for the already completed areas of the software earlier. Thus, tests for the “account” area of an online store could be started before the “checkout” area is testable. This gives an earlier result and also provides a cheaper way to fix bugs (the earlier in development the cheaper).

However, this is still a gamble, as side effects could still occur due to integration errors at the end of the project. Thus, additional testing at the end is always advisable.

Planning test executions thus involves selecting and compiling tests from our test catalog, taking into account various factors such as their importance, significance, priority and feasibility.

After the test plans have been created, and the work packages have been put into a testable state, now the perhaps simplest, but extremely prominent step in the QA process starts – the execution of the tests. This step can be quite straightforward, depending on the quality of the prepared tests, but it always requires a step-by-step approach. (A small tip: in addition to running these tests, freer and exploratory testing is also recommended to uncover additional paths and bugs).


During test execution, however, it is critical to log results as accurately as possible. This includes capturing information such as screen sizes, devices used, browsers used, taking screenshots and recording the ticket ID of the work package, and more. Such logging is necessary for tracking and makes troubleshooting much easier for developers.

After the tests have been run, it’s time to create the final reports. Stakeholders and other involved parties naturally want to know what the status of the project is. Among other things, they are interested in the test coverage, the number of critical issues found, and whether they might suggest a premature go-live of the application. The creation of reports is therefore an essential step in the QA process, as they form the basis for decisions and consequences for the entire project.

Fortunately, in order not to lose track of all these tasks, tools and applications are available. Although in theory simple documents based on Word and Excel can suffice, professional test management applications provide a much more efficient and organized workspace for the entire team.

A leading tool in this field is “TestRail”.

Test Management with TestRail

TestRail, developed by Frankfurt-based Gurock Software, is characterized by its specialization in highly efficient and comprehensive solutions for QA teams. Its offerings range from comprehensive test management capabilities to the creation of detailed test plans, precise execution of tests, meticulous logging and extensive reporting. And for those who want to go even further, TestRail offers an extensive API that can be used to develop custom integrations to further customize and optimize the QA process.

When visiting the TestRail website, it quickly becomes clear that there is more than just software on offer here. TestRail’s content team continuously publishes interesting articles on the subject of testing, which offer real added value thanks to their practical and technically appealing content.

TestRail itself can be used either as a cloud solution or via an on-premise installation. The cloud variant offers a comprehensive solution at quite affordable prices, around EUR 380 per user per year. For those who want additional functions, the Enterprise Cloud version is available for around EUR 780 per user per year. This includes single sign-on, extended access rights, version control of tests and much more.

The installation on your own servers is more expensive, at about 7,700 EUR to 15,620 EUR per year, but it already includes a large contingent of available users and can be a suitable solution, especially for larger teams and companies.

Once you have chosen a version, such as the cloud solution, it can be used after a short registration.

Create a project

Let’s start by creating a new project in TestRail. In addition to the project title and access rights, there are settings related to Defects and References, which will be discussed in more detail later in this article. Through these two functions, it is possible to link applications such as Jira with TestRail and get smooth navigation, as well as a preview of linked Defect tickets or even Epic tickets (references).

Probably the most interesting and important area concerns the type of project we are creating. Here, TestRail offers us three different options for structuring our test catalog.

The user-friendly “Single Repository” option allows us to create a simple and flexible test catalog that can be divided into sections and subsections.

The “Single Repository with Baseline Support” option allows us to keep the simplicity of the first model, but create different branches and versions of test cases. This is especially useful for teams that need to test different product versions simultaneously.

The third variant offers the possibility to use different test catalogs to organize the tests. Test catalogs can be used for functional areas or modules of the application. This type of project is more suitable for teams that need a stricter division of the different areas. A consequence of this is that test executions can only ever include tests from a single test catalog.

For our project launch and greater flexibility, we choose the “Single Repository” type.

Create tests

After the project is created, we are taken to an overview page. Here, at a later stage of the project phase, we will find more useful information.

Now it is time to create our first test. To do this, we open the “Test Cases” section in the project navigation.

On this page we see the currently still empty test catalog. Our task now is to create an appropriate number of tests that are optimally structured and filterable for us.

TestRail offers a variety of options for organizing test cases. In addition to filterable properties, we can also create a hierarchical structure by using sections. There are no hard and fast rules on how this should be done.

We can use sections for different areas of the application like “Checkout” or “Account”, or create them for individual features. The author often finds it helpful to use sections to break down the application by area or feature, as these can be used later as a guide when creating test plans.

Regardless of whether we decide to use sections or not, the next step is to create our first test.

Looking at the input screen, we notice that a lot of emphasis has been placed on relevant information here.

We have the option to define various properties, such as the type of test (smoke, regression, etc.), priority, automation type and much more. If these options are not enough, we can easily create and add new fields through the administration.

When we define the instructions of a test, we have the option to use one of several templates. Besides the variant with a free text field, we also have a template for step-by-step instructions. With the latter, we can define any number of steps with sequences and expected intermediate results. This not only offers the advantage of clear instructions, but also allows us to specify exact results for each step. This way, we can later immediately see from which step an error occurred.

For testers managing large projects, there is also the option of outsourcing certain steps to separate central tests, such as the “login process on a website”, and then reusing them in different tests.

Thanks to the extensive editing options for tests in TestRail, there are no limitations when it comes to defining test cases efficiently and precisely.

Today we learned about the different processes of a testing team in a software project, and started using TestRail to set up our project.

With the tests we created together and the resulting filterable test catalog, we now have a perfect basis to plan the actual testing of our application.

In the next part we will use this test catalog to create test plans as well as to execute the tests.

We will also take a look at reporting, traceability, and Cypress integrations via the available TestRail API to complete our flow.

The post Professional Test Management with TestRail – Part 1 appeared first on International PHP Conference.

]]>
PHPUnit 10 – All you need to know about the latest version https://phpconference.com/blog/phpunit-10-all-you-need-to-know-about-the-latest-version/ Tue, 01 Aug 2023 14:22:31 +0000 https://phpconference.com/?p=85551 PHPUnit 10 is the most important release in PHPUnit's now 23-year history. It is to PHPUnit what PHP 7 was to PHP: a massive cleanup and modernization that lays the foundation for future development. Let's take a look inside at what specific changes PHPUnit 10 has brought and will bring in the coming months.

The post PHPUnit 10 – All you need to know about the latest version appeared first on International PHP Conference.

]]>

PHPUnit 10 should have been released on February 5, 2021, the first Friday in February 2021. It would have followed the tradition of PHPUnit 6, 7, 8 and 9 of being released on the first Friday of February each year, before most people in Germany had their first cup of coffee. PHPUnit 10 was then released on February 3, 2023, the first Friday in February 2023, two years late.

There are reasons for the delay. One of the most substantial may be a pandemic that has affected us all and permanently changed the lives and work habits of many people. Since April 2017, PHPUnit Code Sprints were held every six months, which the author attended with great pleasure and regularity. On one hand, these sprints gave the opportunity to discover and rediscover the functionality of PHPUnit together with Sebastian Bergmann and friends and acquaintances of PHPUnit, and on the other hand to contribute to the development of PHPUnit in a concentrated way.

In September 2019, the last Code Sprint for the time being took place in Mannheim. In October 2019, Sebastian Bergmann, Arne Blankerts, Stefan Priebsch, Ewout Pieter den Ouden and the author participated in the EU-FOSSA Cyber Security Hackathon, organized by the European Union, to work on critical infrastructure for the European Union in parallel with other developers. It was there that the idea for one of the biggest changes in PHPUnit came up, the new event system that would find its way into PHPUnit 10.

However, COVID-19 meant that events such as the PHPUnit Code Sprint, official and unofficial hackathons, PHP user groups and conferences could no longer take place in the usual way. These events were cancelled completely or were only held online. The working habits of many of us, who had previously been able to engage in constructive exchange with developers on-site at customer locations, for example, and were now only able to do so online, also underwent lasting changes as a result of the pandemic.

These changes also affected the work on PHPUnit. However, this does not mean that nothing has been achieved since the release of PHPUnit 9 in February 2020. On the contrary, PHPUnit 10, as already indicated, brings major changes, especially beneath the surface.

PHPUnit 10.0.0

PHPUnit 10.0.0 was released on February 3, 2023. Immediately after the release, a number of releases followed in quick succession until the end of March, fixing bugs and flaws and responding to feedback from developers. PHPUnit 10.0.19 was released on March 27, 2023.

PHPUnit 10 requires PHP 8.1 or higher. Developers using versions older than PHP 8.1 must use older versions of PHPUnit, such as PHPUnit 9 (requires PHP 7.3 or higher) or PHPUnit 8 (requires PHP 7.2 or higher). For PHPUnit 10, the documentation has been completely revised. In the following we want to take a look at the new functionalities.

Event system

The TestListener and Hook system available in PHPUnit 9 provide interfaces for extending PHPUnit. Both interfaces have serious drawbacks.

The TestListener system required third-party vendors to create a class that implemented the TestListener interface. As a result, third-party vendors had to implement every method of this interface, even if a method was not required. To facilitate implementation, PHPUnit provided a TestListenerDefaultImplementation trait.

The TestListener system also allowed third-party developers to manipulate the mutable objects passed into their implementation and thereby alter test results. The best-known example might be an implementation that checks in which environment the tests are executed and, for example, marks and reports failed tests as successful in a CI environment.

The Hook system allowed third-party developers to create a class that only needs to implement the interfaces relevant to the extension. In addition, only scalars, and no longer mutable objects, were passed to these methods. This system improved PHPUnit’s extension interface: it removed the ability to influence test results, but it also required more work from third-party vendors to provide similar functionality.

In PHPUnit 10, both systems have now been replaced with an event system. Almost everything in PHPUnit is now an event. All output, both on the console and in log files, is based on events. The development of this event system was led by Arne Blankerts and the author. As mentioned at the beginning, the development of the event system was started at the EU-FOSSA Cyber Security Hackathon in October 2019 together with Stefan Priebsch and Ewout Pieter den Ouden.

In the process, PHPUnit’s internal code, which previously used the TestListener system and ResultPrinter classes, was completely reworked (and in some cases rewritten) to use the event system instead. Due to the self-imposed constraint of using events for all output, both console and log, many confusing and/or missing events were discovered early on.

The new event system is not just superior to the earlier TestListener and Hook approaches. The work on it had a ripple effect on the entire PHPUnit codebase: a lot of technical debt was finally paid off, and finding the right places to emit the right events brought countless previously hidden inconsistencies and problems to light.

For example, a concrete event required a canonical and immutable representation of the configuration. As a result, the code that loads the XML configuration could be improved. Likewise, the code that processes the command line options and arguments could be improved. And most importantly, the code that combines these sources into the actual configuration has been significantly improved. When this actual configuration was created, large parts of the command line program could be implemented much more easily. This allowed other parts to be cleaned up, and so on and so forth.

The new event system allows read-only access and now has a large number of event objects (currently 67) that can be created during PHPUnit execution and also processed by extensions to PHPUnit. The event objects that are then passed to these extensions, as well as any value objects that are combined into such an event object, are immutable and contain a variety of information that may be of interest to PHPUnit extensions. For example, all of these objects contain information about runtime, current and maximum memory usage, and much more.
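To make this more tangible, here is a minimal sketch of what an extension based on the event system might look like. The class names MemoryReportExtension and MemoryReportSubscriber are invented for illustration, and the exact interface and method names should be checked against the revised documentation.

```php
<?php

declare(strict_types=1);

use PHPUnit\Event\Test\Finished;
use PHPUnit\Event\Test\FinishedSubscriber;
use PHPUnit\Runner\Extension\Extension;
use PHPUnit\Runner\Extension\Facade;
use PHPUnit\Runner\Extension\ParameterCollection;
use PHPUnit\TextUI\Configuration\Configuration;

// Hypothetical subscriber: receives the immutable "test finished" event.
// It can only read from the event; manipulating test results is impossible.
final class MemoryReportSubscriber implements FinishedSubscriber
{
    public function notify(Finished $event): void
    {
        print $event->test()->id() . ': ' .
            $event->telemetryInfo()->memoryUsage()->bytes() . " bytes\n";
    }
}

// Hypothetical extension that registers the subscriber during bootstrap.
final class MemoryReportExtension implements Extension
{
    public function bootstrap(
        Configuration $configuration,
        Facade $facade,
        ParameterCollection $parameters,
    ): void {
        $facade->registerSubscriber(new MemoryReportSubscriber());
    }
}
```

Such an extension is then registered in the XML configuration so that PHPUnit bootstraps it before the test run.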

PHPUnit 10 and its new event system require third-party developers to make significant changes to their extensions and tools for PHPUnit. The PHPUnit development team regrets that this may require significant effort, but at the same time is confident that in the long run the benefits of the new event system will outweigh the costs.

The PHPUnit development team has received promising feedback in this regard. Back in October 2021, Nuno Maduro reported that migrating Pest (an alternative and popular tool in the Laravel scene for running tests based on PHPUnit) from TestListener to the new event system had been a “great” experience. Discussions that the PHPUnit development team had with Filippo Tessarotto were then instrumental in ensuring that solutions like ParaTest could be updated to work with PHPUnit 10.

Separation of test results and test problems

In PHPUnit 10, a clear separation was introduced between the result of a test (errored, failed, incomplete, skipped, or passed) and the problems of a test (considered risky, triggered a warning, etc.).

In PHPUnit 9, the internal error handling routine optionally converted errors of types E_DEPRECATED, E_NOTICE, E_WARNING, E_USER_DEPRECATED, E_USER_NOTICE, E_USER_WARNING, etc. into exceptions. These exceptions aborted the execution of a test and caused PHPUnit to consider the test as failed.

In PHPUnit 10, the internal error handling routine no longer converts these errors to exceptions. Therefore, the execution of a test is no longer aborted when, for example, an E_USER_NOTICE is raised. Consequently, such a test is no longer considered to have errors.

The example in Listing 1 raises an E_USER_NOTICE during the execution of a test.

```php
<?php
 
declare(strict_types=1);
 
use PHPUnit\Framework;
 
final class ExampleTest extends Framework\TestCase
{
  public function testSomething(): void
  {
    $example = new Example();
 
    self::assertTrue($example->doSomething());
  }
 
  public function testSomethingElse(): void
  {
    $example = new Example();
    self::assertFalse($example->doSomething());
  }
}
```
 
```php
<?php
 
declare(strict_types=1);
 
final class Example
{
  public function doSomething(): bool
  {
    // ...
 
    trigger_error('message', E_USER_NOTICE);
 
    // ...
 
    return false;
  }
}
```

In PHPUnit 9, E_USER_NOTICE was converted to an exception and the execution of the test was aborted (Listing 2).

```
➜ php phpunit-9.6.phar --verbose ExampleTest.php
PHPUnit 9.6.0 by Sebastian Bergmann and contributors.
 
Runtime:       PHP 8.2.2
 
EE                                 2 / 2 (100%)
 
Time: 00:00.015, Memory: 6.00 MB
 
There were 2 errors:
 
1) ExampleTest::testSomething
message
 
/path/to/Example.php:11
/path/to/ExampleTest.php:13
 
2) ExampleTest::testSomethingElse
message
 
/path/to/Example.php:11
/path/to/ExampleTest.php:20
 
ERRORS!
Tests: 2, Assertions: 0, Errors: 2.
```

This means that using PHP functionality that triggers E_DEPRECATED, E_NOTICE, E_STRICT, or E_WARNING, or calling code that triggers E_USER_DEPRECATED, E_USER_NOTICE, or E_USER_WARNING can no longer hide an error in the executed code. In the example shown above, the assertion line is never reached when PHPUnit 9 is used and the code under test triggers E_USER_NOTICE.

In PHPUnit 10, the E_USER_NOTICE is not converted to an exception and therefore the execution of the test is not aborted (Listing 3). By default, PHPUnit 10 does not display details about deprecations, notices, or warnings. In order for these details to be displayed, the command line options --display-deprecations, --display-notices and --display-warnings (or their counterparts in the XML configuration file) must be used.

```
PHPUnit 10.0.0 by Sebastian Bergmann and contributors.
 
Runtime:       PHP 8.2.2
 
FN                                       2 / 2 (100%)
 
Time: 00:00.015, Memory: 6.00 MB
 
There was 1 failure:
 
1) ExampleTest::testSomething
Failed asserting that false is true.
 
/path/to/ExampleTest.php:13
 
--
 
There were 2 notices:
 
1) ExampleTest::testSomething
message
 
/path/to/ExampleTest.php:13
 
2) ExampleTest::testSomethingElse
message
 
/path/to/ExampleTest.php:20
 
FAILURES!
Tests: 2, Assertions: 2, Failures: 1, Notices: 2.
```

Metadata with attributes

In PHPUnit 10, metadata can be specified for test classes and test methods as well as for tested code units with attributes. Listing 4 shows the specification of metadata with annotations as known from PHPUnit 9 and older versions of PHPUnit. Listing 5 shows the specification of metadata with attributes as it is possible in PHPUnit 10.

```php
<?php 
 
declare(strict_types=1);
 
namespace App\Test;
 
use App\Example;
use PHPUnit\Framework;
 
/**
 * @covers \App\Example 
 */
final class ExampleTest extends Framework\TestCase
{
  /**
   * @dataProvider provideData
   */
  public function testSomething(
    string $expected, 
    string $input,
  ): void {
    $example = new Example();
 
    $actual = $example->doSomething($input);
 
    self::assertSame($expected, $actual);
  }
 
  public static function provideData(): array
  {
    return [
      [
        'foo', 
        'bar',
      ],
    ];
  }
}
```
```php
<?php 
 
declare(strict_types=1);
 
namespace App\Test;
 
use App\Example;
use PHPUnit\Framework;
 
#[Framework\Attributes\CoversClass(Example::class)]
final class ExampleTest extends Framework\TestCase
{
  #[Framework\Attributes\DataProvider('provideData')]
  public function testSomething(
    string $expected, 
    string $input,
  ): void {
    $example = new Example();
    
    $actual = $example->doSomething($input);
 
    self::assertSame($expected, $actual);
  }
 
  public static function provideData(): array
  {
    return [
      [
        'foo', 
        'bar',
      ],
    ];
  }
}
```

In PHPUnit 10, both annotations and attributes are supported. PHPUnit 10 first searches for attributes for a code unit. If no attributes are found, the system falls back on any existing annotations.

Currently there are no concrete plans as to if and when support for annotations will be marked as deprecated and removed.

New assertions

A number of assertions have been added in PHPUnit 10. These include the following; a short usage sketch follows the list:

  • assertIsList()
  • assertStringEqualsStringIgnoringLineEndings()
  • assertStringContainsStringIgnoringLineEndings()
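
The following sketch shows how these assertions might be used; the test class is invented for illustration:

```php
<?php

declare(strict_types=1);

use PHPUnit\Framework\TestCase;

// Hypothetical test class demonstrating the three new assertions.
final class NewAssertionsTest extends TestCase
{
    public function testArrayIsList(): void
    {
        // Passes: consecutive integer keys starting at 0.
        self::assertIsList(['a', 'b', 'c']);
    }

    public function testLineEndingsAreIgnored(): void
    {
        // Both pass: \n and \r\n are treated as equivalent.
        self::assertStringEqualsStringIgnoringLineEndings("a\nb", "a\r\nb");
        self::assertStringContainsStringIgnoringLineEndings("a\nb", "x a\r\nb y");
    }
}
```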

New command line options

A number of command line options have been added in PHPUnit 10. These include the following; a usage example follows the list:

  • --display-deprecations, enables the display of deprecations
  • --display-errors, enables the display of errors
  • --display-incomplete, enables the display of incomplete tests
  • --display-notices, enables the display of notices
  • --display-skipped, enables the display of skipped tests
  • --display-warnings, enables the display of warnings
  • --no-extensions, disables all extensions for PHPUnit
  • --no-output, disables all output from PHPUnit
  • --no-progress, disables the progress indicator
  • --no-results, disables the results display
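
For example, to see the details of the notices from Listing 3, the run from above could be repeated with the corresponding display option (the phar file name mirrors the earlier listings and is just an example):

```
php phpunit-10.0.phar --display-notices ExampleTest.php
```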

Removed functionalities

In PHPUnit 10, all functionality that was marked as deprecated in PHPUnit 9 has been removed. Developers who receive warnings about the use of deprecated PHPUnit functionality when running their tests with PHPUnit 9 will not be able to upgrade to PHPUnit 10 until they have stopped using that deprecated functionality.

Removal of PHPDBG and Xdebug 2 support

In PHPUnit 10, support for PHPDBG and Xdebug 2 for collecting code coverage has been removed. Either PCOV or Xdebug 3 is required to collect code coverage.
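
With Xdebug 3, the coverage mode must also be activated. One common way to do this, shown here as an example invocation rather than a requirement from the article, is the XDEBUG_MODE environment variable:

```
XDEBUG_MODE=coverage php phpunit-10.0.phar --coverage-text
```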

Removal of integration with Prophecy

In PHPUnit 10, the integration with Prophecy for creating test doubles has been removed. Developers who use libraries such as Prophecy or Mockery in their tests to create test doubles will need to rewrite their tests for PHPUnit 10 or wait for Prophecy and Mockery to support PHPUnit 10. At this time, neither Prophecy nor Mockery supports PHPUnit 10.

Removal of assertions

In PHPUnit 10, a number of assertions have been removed, some of which were replaced in PHPUnit 9 with newly added alternatives. These assertions include:

  • assertNotIsReadable(), replaced by assertIsNotReadable()
  • assertNotIsWritable(), replaced by assertIsNotWritable()
  • assertDirectoryNotExists(), replaced by assertDirectoryDoesNotExist()
  • assertDirectoryNotIsReadable(), replaced by assertDirectoryIsNotReadable()
  • assertDirectoryNotIsWritable(), replaced by assertDirectoryIsNotWritable()
  • assertFileNotExists(), replaced by assertFileDoesNotExist()
  • assertFileNotIsReadable(), replaced by assertFileIsNotReadable()
  • assertFileNotIsWritable(), replaced by assertFileIsNotWritable()
  • assertRegExp(), replaced by assertMatchesRegularExpression()
  • assertNotRegExp(), replaced by assertDoesNotMatchRegularExpression()
  • assertEqualXMLStructure(), removed without replacement

Removal of matchers

In PHPUnit 10, the at() matcher has been removed. This matcher previously allowed setting expectations on test doubles that methods would be called in a specific order.

The withConsecutive() matcher has also been removed. This matcher previously allowed expectations to be placed on test doubles that methods would be called in a certain order with certain arguments.

Both matchers made it possible to write code that introduces temporal coupling. Removing them underlines that code which introduces temporal coupling is no longer considered good practice and should be avoided.

Removal of command line options

In PHPUnit 10, a number of command line options have been removed. These include:

  • --debug, allowed debug output to be enabled while running tests
  • --extensions, allowed configuration of extensions for PHPUnit
  • --printer, allowed configuration of a class to output test results
  • --repeat, allowed repeated execution of tests
  • --verbose, allowed more detailed output to be configured while running tests

Removal of the TestListener and Hook systems

In PHPUnit 10, both the TestListener and Hook systems have been removed as interfaces for third-party extensions to PHPUnit. Developers who rely on functionality from PHPUnit 9 extensions will not be able to use PHPUnit 10 until those extensions have been migrated to PHPUnit 10’s new event system or they have found alternative extensions that are compatible with PHPUnit 10.

PHPUnit 10.1.0

PHPUnit 10.1.0 was released on April 14, 2023. This release was followed by only a small number of patch releases; PHPUnit 10.1.3 was released on May 11, 2023. Below is an overview of the new, changed, and deprecated functionality of PHPUnit 10.1.

New assertions

New assertions have been added in PHPUnit 10.1.0; a short usage sketch follows the list. These include:

  • assertObjectHasProperty()
  • assertObjectNotHasProperty()
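
As a sketch, both assertions might be used like this (the test class is invented for illustration):

```php
<?php

declare(strict_types=1);

use PHPUnit\Framework\TestCase;

// Hypothetical test class demonstrating the new object property assertions.
final class PropertyTest extends TestCase
{
    public function testProperties(): void
    {
        $object = new stdClass();
        $object->name = 'PHPUnit';

        self::assertObjectHasProperty('name', $object);
        self::assertObjectNotHasProperty('version', $object);
    }
}
```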

New attributes

New attributes have been added in PHPUnit 10.1.0. These attributes include:

  • IgnoreClassForCodeCoverage
  • IgnoreMethodForCodeCoverage
  • IgnoreFunctionForCodeCoverage

New source element in XML configuration

In PHPUnit 10.1.0, a new <source> element has been added to the XML configuration. This element makes it possible to configure a list of directories and files that PHPUnit should consider the source code of a project. In addition, it allows configuring in detail how notices, deprecations, and warnings that arise from running the source code are handled.

Accordingly, there is now a new Source object that represents the configuration of the <source> element. The <source> element replaces the <coverage> element, which has now been marked as deprecated.
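
A minimal sketch of such a configuration might look like this; the directory and file names are invented, and the element and attribute names should be checked against the revised documentation:

```xml
<source restrictNotices="true" restrictWarnings="true">
    <include>
        <directory suffix=".php">src</directory>
    </include>
    <exclude>
        <file>src/legacy.php</file>
    </exclude>
</source>
```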

New methods for creating test doubles

In PHPUnit 10.1.0, a TestCase::createConfiguredStub() method has been introduced, analogous to the TestCase::createConfiguredMock() method that has been available since PHPUnit 9. This method makes it possible to create a test double that has configured methods and return values but does not cause a test to fail when other, non-configured methods are called.
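
As a minimal sketch, assuming a hypothetical Mailer interface:

```php
<?php

declare(strict_types=1);

use PHPUnit\Framework\TestCase;

// Hypothetical interface, invented for illustration.
interface Mailer
{
    public function send(string $recipient): bool;
}

final class MailerTest extends TestCase
{
    public function testConfiguredStub(): void
    {
        // The stub's send() method is configured to return true.
        $mailer = $this->createConfiguredStub(Mailer::class, [
            'send' => true,
        ]);

        self::assertTrue($mailer->send('user@example.com'));
    }
}
```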

New method for configuration by extensions

In PHPUnit 10.1.2, a method has been added to the extension facade that allows an extension to PHPUnit to indicate that the extension intends to replace the entire output of PHPUnit.

Suppression of deprecations, notices and warnings

In PHPUnit 10.1.0, E_USER_* errors suppressed by the @ operator are ignored again.
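
In other words, a suppressed error like the one in the following hypothetical snippet no longer shows up in PHPUnit’s reporting:

```php
// The error suppression operator (@) silences this E_USER_DEPRECATED error;
// since PHPUnit 10.1.0, such suppressed errors are ignored again by PHPUnit.
@trigger_error('someFunction() is deprecated', E_USER_DEPRECATED);
```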

coverage element in XML configuration

In PHPUnit 10.1.0, the coverage element of the XML configuration was marked as deprecated. This element is replaced by the newly added source element.

Methods for creating test doubles

In PHPUnit 10.1.0, methods used to create and configure test doubles were marked as deprecated. These include:

  • MockBuilder::enableProxyingToOriginalMethods()
  • MockBuilder::disableProxyingToOriginalMethods()
  • MockBuilder::allowMockingUnknownTypes()
  • MockBuilder::disallowMockingUnknownTypes()
  • MockBuilder::enableArgumentCloning()
  • MockBuilder::disableArgumentCloning()
  • MockBuilder::addMethods()
  • MockBuilder::getMockForAbstractClass()
  • MockBuilder::getMockForTrait()
  • TestCase::createTestProxy()
  • TestCase::getMockForAbstractClass()
  • TestCase::getMockForTrait()
  • TestCase::getMockFromWsdl()
  • TestCase::getObjectForTrait()

These methods are expected to be removed in PHPUnit 12.

Methods to access aspects of configured source code

In PHPUnit 10.1.0, with the introduction of the <source> element in the XML configuration, methods for accessing aspects of the configured source code were marked as deprecated. In their place, the newly introduced methods of the Source object can be used. These methods include:

  • Configuration::hasNonEmptyListOfFilesToBeIncludedInCodeCoverageReport(), replaced by Source::notEmpty()
  • Configuration::coverageIncludeDirectories(), replaced by Source::includeDirectories()
  • Configuration::coverageIncludeFiles(), replaced by Source::includeFiles()
  • Configuration::coverageExcludeDirectories(), replaced by Source::excludeDirectories()
  • Configuration::coverageExcludeFiles(), replaced by Source::excludeFiles()

PHPUnit 10.2.0

PHPUnit 10.2.0 was released on June 2, 2023. PHPUnit 10.2.2 was released on June 11, 2023. Below is an overview of the new functionality and the functionality marked as deprecated.

Optional suppression of deprecations, notices and warnings

In PHPUnit 10.2.0, enhancements have been made to allow optional suppression of deprecations, notices, and warnings.

Methods to access aspects of the configured source code

In PHPUnit 10.2.0, further methods for accessing aspects of the configured source code have been marked as deprecated. Instead, the newly introduced methods of the Source object can be used. These methods include:

  • Configuration::restrictDeprecations(), replaced by Source::restrictDeprecations()
  • Configuration::restrictNotices(), replaced by Source::restrictNotices()
  • Configuration::restrictWarnings(), replaced by Source::restrictWarnings()

PHPUnit 10.3.0

PHPUnit 10.3.0 is scheduled for release on August 4, 2023. The following is planned for it.

XML format for log files

For PHPUnit 10.3.0, a new XML format for log files is tentatively planned. The XML format for log files used by PHPUnit so far has existed for about 20 years and is based on the XML format used by JUnit. This format has the disadvantage that it is under the control of neither JUnit nor PHPUnit. In addition, there is no official schema in XSD format that can be used to check the validity of log files.

However, the goal of the new XML format is not to produce yet another standard. Rather, the goal of a PHPUnit-specific XML format is to be able to accommodate more information. Thanks to the new event system of PHPUnit 10, significantly more information is now available that unfortunately cannot be represented in the XML format currently used by PHPUnit 10.

Further planned releases

PHPUnit 10.4.0 is scheduled for release on October 6, 2023, and PHPUnit 10.5.0 on December 1, 2023.

The post PHPUnit 10 – All you need to know about the latest version appeared first on International PHP Conference.

]]>
The PHPUnuhi Framework at a Glance https://phpconference.com/blog/the-phpunuhi-framework-at-a-glance/ Fri, 14 Jul 2023 12:31:37 +0000 https://phpconference.com/?p=85494 While pipelines, tests, and automation positively influence many aspects of our daily work, there are still topics where manual work makes developers yawn. The platform-independent open source framework PHPUnuhi is trying to revamp the topic of “translations”, enhancing it with possibilities in the areas of CI/CD, storage formats, and even OpenAI.

The post The PHPUnuhi Framework at a Glance appeared first on International PHP Conference.

]]>

Who hasn’t had the following situation? You’re working on an application, a plug-in, or something similar and suddenly discover that translations in some language are missing. Depending on the software’s application area, this can either make the user smile slightly, or it can have far-reaching consequences. But one thing is always the same: the non-functional requirement “trust in the software” is harmed.

It’s a pity that mistakes keep happening here. Meanwhile, there are tools like PHPUnit, PHPStan, and many others that help create high-quality applications. But what about translations? Wouldn’t it be wonderful if the pull request pipeline failed right when a colleague forgot a translation? Or when individual localizations drift out of sync and end up with different, invalid structures? This is exactly PHPUnuhi’s approach. But let’s start at the beginning.

Among other things, I’m the developer of the official Mollie payment plug-ins for Shopware. These plug-ins serve as central and optionally installable modules in online stores, based on Shopware 5 or Shopware 6 [1]. Merchants can install these plug-ins in no time and offer a wide range of payment methods from Mollie in their Shopware store [2]. Anyone who’s ever had to do anything in this area knows that payment is a serious sector. In short, it’s about money. There aren’t many excuses when a mistake happens. It just has to work!

Because of this, we’ve already spent a lot of time building pipelines. These range from the usual unit tests and static analysis to many E2E tests based on Cypress. But despite these precautions, it happens again and again that translations for multilingual plug-ins are forgotten. Every developer and tester knows how difficult it is to verify all areas in all languages, especially as a small team. But to the product’s end user, it simply looks embarrassing and untested.

So one day I decided to integrate a small script that would do at least some rough checks. Lo and behold, soon after, the first pipeline failed when I forgot a translation.

From then on, there were always one or two ideas for further tests and features. And so I decided to completely rebuild the previously small script from the Mollie plug-ins and publish it in combination with many other requirements as a platform-independent open source framework. After all, the world only benefits when more developers get something out of it.

But before we begin our first application, why “unuhi”? Quite simply, it means “translate” or “translation” in Hawaiian. Do I speak Hawaiian? No.

First steps

Before we get into the possibilities and basic concepts of PHPUnuhi, I’d like to start directly with its usage, because after just a few steps the tests are ready to be integrated into a pipeline. Let’s imagine we are the developers of a piece of software that has several translations based on JSON files. These are already finished and are located in the project or source code of the application.

You can easily install PHPUnuhi with Composer. The recommendation is to do so as a dev dependency:

composer require --dev boxblinkracer/phpunuhi

After installation, all that’s needed is to create an XML-based configuration, and the framework is ready for use.

In our configuration (phpunuhi.xml) we define one or more translation sets. These sets are freely definable bundles of localizations. Depending on the format, a localization is then mapped to a file, a section, or something similar. You can either create one large set or several topic-based sets, depending on the platform and application requirements (Listing 1).

<phpunuhi>
  <translations>
    <set name="App">
      <format>
        <json/>
      </format>
      <locales>
        <locale name="de">./snippets/de.json</locale>
        <locale name="en">./snippets/en.json</locale>
      </locales>
    </set>
  </translations>
</phpunuhi>

With that, we’re already finished with basic installation and configuration. Now we can start our tests and check how the translations are doing.

php vendor/bin/phpunuhi validate

Who could have seen it coming? Unfortunately, the tests fail. We receive the information that a wrong structure was found, and that a translation exists but doesn’t contain a value.

PHPUnuhi works with unique keys for the individual translations within a localization. In our case, there’s an issue with the key card.btnCancel in both the German and the English version (Fig. 1).

(Editor’s note: This article was originally published in German and has been translated into English. Therefore, the translation example in PHPUnuhi is working from German to English.)

Fig. 1: Example of error output during validation
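
To make the two findings concrete, the snippet files might look roughly like this. This is a hypothetical reconstruction; the article only names the affected key card.btnCancel. The German de.json contains both keys:

```json
{
    "card": {
        "btnCancel": "Abbrechen",
        "title": "Karte"
    }
}
```

The English en.json, on the other hand, is missing the title key (the wrong structure) and contains an empty value for card.btnCancel (the translation that exists but has no value):

```json
{
    "card": {
        "btnCancel": ""
    }
}
```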

To solve this problem, we have the option of manually entering the missing entry in the de.json file, or we can use a prepared command to automatically repair the structures:

php vendor/bin/phpunuhi fix:structure

This will give us a uniform structure in both files. Now we can run the following command and automatically correct our empty translation too.

php vendor/bin/phpunuhi translate --service=googleweb

With Google’s support, our empty entry has now been automatically translated and entered into the corresponding JSON file. Besides Google [3], DeepL [4], and OpenAI [5] can also be used for this. But before we delve deeper into this topic, it’s time to get to know the basic framework better.

PHPUnuhi’s basic structure

PHPUnuhi is built as a combination of different abstraction layers. This makes it possible to guarantee basic functionality while remaining flexible in the choice of formats and services. What does this mean?

In the current version, there are three basic pillars: storage formats, exchange formats, and translation services. These are in constant interaction and can be combined with each other as you wish (Fig. 2).

Fig. 2: Basic structure of the PHPUnuhi abstraction layers

Storage formats

Storage formats define how data is persisted. Translations can be stored in JSON files, INI files, PHP (array) files, or directly in a database (Shopware 6). Therefore, the focus of Storages is on reading, converting, and writing translations.

Different formats can also be equipped with individual settings. For instance, the JSON and PHP formats have the option of specifying the number of indentations and alphabetical sorting. In the case of the Shopware 6 storage, the entity of the database entries can (and must) be specified. Listing 2 shows two examples for the INI and Shopware 6 formats.

<set name="Storefront">
  <format>
    <ini indent="4" sort="true"/>
  </format>
  ...
</set>
 
<set name="Products">
  <format>
    <shopware6 entity="product"/>
  </format>
  ...
</set>

While simpler formats like JSON, INI, and PHP are based on simple data structures, there are also formats that divide translations into groups, like Shopware 6. The Shopware 6 format directly connects to the database, so a corresponding connection to the database must be established first. The parameters needed for this connection can be stored easily with an env area in the XML configuration or specified directly via env export (Listing 3).

<phpunuhi>
  <php>
    <env name="DB_HOST" value="127.0.0.1"/>
    <env name="DB_PORT" value="3306"/>
    <env name="DB_USER" value=""/>
    <env name="DB_PASSWD" value=""/>
    <env name="DB_DBNAME" value="shopware"/>
  </php>
</phpunuhi>

But back to our groups. Shopware 6 works as a storage with entities in the database. These are things like products, payment types, currencies, and more. Here, translations don’t refer to the general names of properties, but to product data or user data in the system.

This means that each entry of these entities (for instance, a single product) has multiple properties (name, description, etc.) that can be translated into different languages. The resulting additional dimension in our matrix is solved in PHPUnuhi using groups. Each entity (each product) receives a unique group ID with all associated translations. Table 1 shows an example of this.

Key          Group      DE                    EN
name         product-1  PHP Magazin           PHP Magazine
description  product-1  ein tolles Heft       a great magazine
name         product-2  Entwickler Magazin    Developer Magazine
description  product-2  auch ein tolles Heft  also a great magazine

Table 1: Example of generated translation structures based on groups

Considering that products in particular can have many properties, this list can get very long. There’s also a high chance that only a part of the properties should even be translated at all. This is where another storage format feature comes into play: the filters.

With include or exclude filters, you can include or exclude certain translations. Wildcard placeholders can also be used for this. The configuration in Listing 4 removes the custom_fields property and all properties beginning with meta_ from the translation list.

<set>
  ...
  <filter>
    <exclude>
      <key>custom_fields</key>
      <key>meta_*</key>
    </exclude>
  </filter>
  ...
</set>

Exchange formats

This type of format, or abstraction layer, is used for exchange with other systems. It focuses on preparing the data in a way suitable for the format and the storage (export), as well as on reading certain file types to convert them back into PHPUnuhi-compatible translations (import).

Of course, the classic CSV format cannot be missing. It supports the export and import of both simple and extended storage formats (groups).

In other words, no matter what your storage format is, you will receive a CSV file. If the storage you use supports writing translations, then the CSV file can be automatically imported again.

Besides CSV, there’s also an integrated HTML format. This format solves several problems at once. The export creates a single index.html file that can be easily opened in any browser. This file contains an HTML-based spreadsheet with integrated options for editing and saving the adjustments. CSS and JavaScript are directly integrated. This is a great plug-and-play approach, especially for colleagues who tend to send back .xls files instead of the needed CSV files.

However, more than just local processing is possible. There is also another variant that’s just as exciting, for instance for staging systems. Since the export path can be selected individually, it’s possible to store this file in a public directory on the web server. This way, a certain URL on the staging system can output an overview of all currently available translations. Thanks to the integrated form, these can also be edited directly. The resulting output can be downloaded and imported into the software with the import command for the next iteration. To add even more automation, generating this export can either run as a post-deployment job in the pipeline or simply at a fixed interval via a cronjob or something similar.

The HTML format also supports storage formats with groups. In this case, grouped translations are displayed visually so that translation can be done intuitively. Figures 3 and 4 show examples of HTML and CSV exports.

Fig. 3: Example of HTML export with integrated form

Fig. 4: Example of a CSV export with three languages

Translation services

The last abstraction area in the current version is connecting to different translation providers. Currently, it supports Google, DeepL, and OpenAI. This makes it possible for missing translations to be automatically added with an integrated translate command. Thanks to the framework’s basic concept, this means that all kinds of storage formats that support writing translations can also be combined with translation services at the same time.

PHPUnuhi only needs an existing value in another language as a basis for this automation. If one exists, the translation can be requested from the external service. The result is automatically persisted using the configured storage.

Further individual configuration options are provided when integrating the different providers. For instance, with DeepL you can use the --deepl-formal argument to specify whether the translation should be formal or informal. This affects the German salutations “du” and “Sie”, for instance.

The googleweb service can be used for a quick start. This sends a simple query to the familiar Google Translate website:

php vendor/bin/phpunuhi translate --service=googleweb

Although this isn’t recommended for continuous mass queries, it usually works quite well and can be used in a targeted way.

If you want to take a more professional approach, you can also connect to Google Cloud Translate and, as previously mentioned, to the increasingly successful DeepL. For AI enthusiasts, there is now also an OpenAI integration. It currently uses the text-davinci-003 model, which is not perfect yet but already delivers surprisingly good results. OpenAI can be used with the following command, specifying the corresponding service and the API key:

php vendor/bin/phpunuhi translate --service=openai --openai-key=(my-api-key)

What functions are available?

Now that we understand the basic framework and some of its possibilities, we can take a closer look at the framework’s extended functionality.

With the help of a few commands, you can perform much more than simple translation testing. State analysis, listings, reporting, imports and exports offer a multitude of possibilities for your project.

Translation coverage

With the status command, you can output coverage in the area of translations. Values are provided on the level of localizations, translation sets, and as an overall view:

php vendor/bin/phpunuhi status

Validation

One of the framework’s core functions is the validate command. As I previously mentioned, you can test translations for completeness. But the command also has some other useful features.

A problem that occurs frequently during further software development is an unplanned variation in translation key spelling. While working with code styles, little consideration is given to the fact that text modules should also have a conforming structure. Using case style validation, you can maintain the consistency of keys over the project’s lifecycle. PHPUnuhi offers a list of potential options, like the well-known variants Pascal, Camel, Kebab, and more.

A translation set can therefore be assigned several permitted case styles. If no styles are specified, the whole test is skipped. The actual test based on this list works for simple storage formats as well as for multi-nested storages like JSON and PHP, where all hierarchy levels are checked against the specified styles.

Optionally, you can also pin specific styles to certain levels. For a nested structure like JSON, for example, Pascal case can be defined at the root level, while kebab case must be used at all other levels (Listing 5).

<set>
  <styles>
    <style level="0">pascal</style>
    <style>kebab</style>
  </styles>
</set>

Friends of JUnit reports will also get their money’s worth with PHPUnuhi. With the report-format argument, you can generate a JUnit compliant XML file:

php vendor/bin/phpunuhi validate --report-format=junit --report-output=junit.xml

This file contains all performed tests with corresponding error reports and can be used in the familiar way and processed by machines.

Fix structure

With large file-based translations like JSON and INI, manually fixing diverging structures can be extremely time-consuming, even more so if they span several hierarchies or levels. This can be automated and simplified using the integrated fix:structure command.

In the process, PHPUnuhi verifies individual structures and ensures that each localization also receives all of the entries. As a little bonus, the storage formats also rewrite values with previously configured indentations or even in alphabetical order, depending on the type:

php vendor/bin/phpunuhi fix:structure

I should mention that this is only a matter of repairing structures. The values are stored with an empty string, so a validation still fails.

Export/Import

Exports and imports provide a simple variant for working with external agencies and systems. Using a simple export command, you can quickly create files that can be passed to systems or people by selecting a format:

php vendor/bin/phpunuhi export ... --format=csv
php vendor/bin/phpunuhi export ... --format=html

If no special translation set is specified, then all sets will be exported to separate files. However, as with many commands, you can also select a set by argument and have only this set processed. After the customized results have been returned, they can be imported back into the system with an import command:

php vendor/bin/phpunuhi import --set=storefront --file=storefront.csv

It should be noted here that version control using Git or something similar is strongly recommended, especially when working with file-based storage formats. For storage formats using a database, an appropriate back-up should also be made before the import.

Translate

The translate command is one of the more exciting features along with the validate command. As already described in the “Translation Services” section, an external service can be used to automatically translate values. A service is simply selected with the service argument.

Now PHPUnuhi goes through all existing entries and tries to translate the empty translations with the specified service. The value of a locale that already contains one serves as the basis. Only a single existing value is needed; if no value exists in any locale, the entry cannot be translated.

php vendor/bin/phpunuhi translate --service=googleweb
php vendor/bin/phpunuhi translate --service=deepl --deepl-key=xyz

If you want to completely retranslate an existing localization, you can use the force argument for this. You must specify the locale that will be retranslated.

php vendor/bin/phpunuhi translate --service=googleweb --force=en-GB

But with automated services, it’s important to always remember that translations should be generated depending on the application’s context. Automatically generated results fit most cases, but manual, human verification is still recommended.

Conclusion

As a platform-independent open source framework, PHPUnuhi tries to simplify translation work for developers and teams, while also increasing the possibilities of quality assurance measures. With its simple configuration options, it can be quickly integrated into existing projects and used efficiently after just a few minutes. PHPUnuhi’s possibilities are far from exhausted. So if you feel like joining or just have some ideas, you can participate via the GitHub repository [6].


Links & Literature

[1] https://www.shopware.com

[2] https://www.mollie.com

[3] https://translate.google.com

[4] https://www.deepl.com

[5] https://openai.com

[6] https://github.com/boxblinkracer/phpunuhi

The post The PHPUnuhi Framework at a Glance appeared first on International PHP Conference.

]]>