Securing Web Applications with WebAuthn and Passkeys
https://phpconference.com/blog/webauthn-passkeys-secure-authentication/

WebAuthn and passkeys offer a secure, password-free alternative to traditional authentication, enhancing user convenience and safety. This guide will show you how to implement these modern technologies in your web applications using practical examples with PHP and JavaScript, providing a seamless and reliable login experience for your users.

Passwords have long been the weak link in securing online accounts, vulnerable to phishing, brute force attacks, and data breaches. Enter passwordless authentication—a modern solution powered by the Web Authentication (WebAuthn) API and public key cryptography. Supported by the FIDO Alliance, this approach eliminates shared secrets like passwords and replaces them with passkeys: cryptographic credentials securely stored on devices.

Using strong authenticators such as platform authenticators, security keys, or biometric features like Touch ID, passkeys offer both robust security and seamless usability. By ensuring private keys never leave the user’s device, this method not only resists phishing attacks but also streamlines the authentication flow for a better user experience.

In this article, we explore how passkeys and the WebAuthn protocol work together to revolutionize authentication for apps and websites. We also provide a hands-on guide to implementing passkeys using JavaScript and PHP. Whether you’re building new passwordless authentication flows or enhancing existing systems, this practical approach will help you get started with ease.

Background

Think of how many websites you’ve created an account for. It’s likely more than the number of unique passwords you can memorize and maintain. Many of us have deferred to password managers, whether built into web browsers or standalone services, to keep track of them all. While you may be confident that you’ve created a strong password for a website, there’s no way to know how that website stores your password on their end. Sites have been compromised and passwords leaked, and because passwords get reused, one leak can lead to further damage on other sites that share the same password. Two-factor authentication has sprung up to help, but even that has its flaws: someone may be able to redirect your phone number, and it’s sometimes inconvenient for the end user (let me go get my phone in the other room).

Those are the challenges as an end user, but you’re a web developer who’s tasked with making sure your web application is secure and convenient for your end users. It can be daunting to implement a user authentication system for your website with the necessary features to handle forgotten passwords, two-factor authentication, and more. Surely, there’s a more efficient solution! This is where the WebAuthn protocol and passkeys come into play. WebAuthn is currently supported by roughly 98% of web browsers in global use, so it’s a great time to implement it: you can be assured your work will be supported.

This article aims to give you a high-level understanding of how this system works and how you can implement it for your web application, avoiding some of the more technical details. When we’re done, you should be able to implement a rough user account creation and login flow to get used to how it works, and then dive deeper for any more specific use case scenarios.


The 30,000 Foot View

Let’s start with a super high-level overview. For the applications I’ve developed, I use a PHP framework (CodeIgniter) with JavaScript code on the client side. When someone wants to create an account, they provide a username (ideally an email address), which is sent to the PHP backend to create a credential creation request. The backend’s response is used to create the credentials, which the browser and the user’s device handle; the credentials are then sent back to the PHP backend to be verified and saved in the database. Later, when it’s time to log in, the user can simply click a “Log in with passkey” button. This sends a request to the PHP backend to prepare the challenge, and its response is used to request the credentials associated with the website from the browser and the user’s device. Once the user completes that process, the response is sent back to the PHP backend for verification and, if successful, login.

Depending on the user’s ecosystem (i.e. Apple, Google, Microsoft, etc), created credentials are securely saved on that device and synchronized with the user’s system account. So for example, if the user was using Safari on an iMac to create the account, the credentials are saved in Apple’s Passwords application and synchronized with the user’s iCloud account. This ensures the credentials are available to the user regardless of what device they are using. So if your user goes home and uses their iPad to access your web application, they would be able to simply log in with the credentials they created earlier on the iMac.

Creating the Passkey

Let’s dive into the details. On the backend, I like to use the lbuchs/WebAuthn library found on GitHub. It simplifies and packages a lot of the processes.

The first part of the implementation is when a user wants to create an account for your site. On our site, we create a guest user account for any new anonymous session and associate its ID # with that session. Then whenever the user wants to mark an item as a favorite, add something to a shopping cart, create an account, or perform some other action, we already have a “user” registered. When the user wants to create an account with a passkey, we can then store that ID # in the credential for easier retrieval later. If your application doesn’t already have user accounts set up, you could just create a GUID associated with the username/email address and then use that GUID later when you create the user record. Just keep in mind that, if you’re storing things in a relational database, users can have multiple passkeys; the ideal arrangement is a user table and a passkey table, with a user_id foreign key in the passkey table to tie each passkey back to its user.

When the user fills out their username and submits the create account form, we first send a challenge request to the server with their username. I use a JavaScript fetch call to send that request; the server uses the WebAuthn library to create the challenge, saves it to the session, and sends it back to the client, which will use it to generate the credential in the browser.

The JavaScript looks something like this:

   let userEmail = document.getElementById('email').value;
   let form_data = new FormData();
   form_data.append('email', userEmail);
   let response = await fetch('/backend-signup-pre.php', {
     method: 'POST',
     body: form_data
   });
   let data = await response.json();

This sends the POST request to the backend-signup-pre.php script which looks something like this:

   /**
    * Simple backend for the web application.
    */
   require_once __DIR__ . '/../vendor/autoload.php';
   use lbuchs\WebAuthn\WebAuthn;
   session_start();
   $domain = "lndo.site";
   $webauthn = new WebAuthn("Simple Passkey App", $domain);
   $email = $_POST['email'];
   // Keep the email for the follow-up request that creates the user.
   $_SESSION['email'] = $email;
   $_SESSION['unique_id'] = bin2hex(random_bytes(32));
   $response = $webauthn->getCreateArgs(\hex2bin($_SESSION['unique_id']), $email, $email);
   $_SESSION['challenge'] = ($webauthn->getChallenge())->getBinaryString();
   echo json_encode($response);

The WebAuthn constructor takes two arguments: the name and the domain of your application. These are baked into the credential to display to the user and to restrict which sites it can be used on. The domain can be as specific as you’d like: you could use the registrable (apex) domain to have the passkey work across all subdomains, or make it subdomain-specific.
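
For example, assuming a hypothetical application served from app.example.com, the second argument controls how widely the passkey can be used:

   // Scoped to the registrable domain: the passkey works on example.com
   // and on any of its subdomains (app.example.com, shop.example.com, ...).
   $webauthn = new WebAuthn("My App", "example.com");

   // Scoped to a single subdomain: the passkey only works on app.example.com.
   $webauthn = new WebAuthn("My App", "app.example.com");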

You can see that this code creates a unique_id. If you have the user ID, then this is where you’d use it instead. This also gets baked into the credential, so you want to make it unique to the user; later, when the user is logging in and you’re given this unique ID, you’ll be able to find the associated user.

Next you call getCreateArgs with the information you have. There are a lot of options you can use to scope the created credential, but I’m just using the defaults here. You will want to store the generated challenge in the session and then return the response to the browser.

A lot of the underlying data is binary, so it’s helpful to have support methods to convert data back and forth. You can see the bin2hex and hex2bin calls in the PHP example above. On the JavaScript side, I found these helper functions to convert array buffers to base64 and back.

   // Helper functions.
   var helper = {
     // array buffer to base64
     atb: b => {
       let u = new Uint8Array(b), s = "";
       for (let i = 0; i < u.byteLength; i++) { s += String.fromCharCode(u[i]); }
       return btoa(s);
     },
     // base64 to array buffer
     bta: o => {
       let pre = "=?BINARY?B?", suf = "?=";
       for (let k in o) {
         if (typeof o[k] == "string") {
           let s = o[k];
           if (s.substring(0, pre.length) == pre && s.substring(s.length - suf.length) == suf) {
             let b = window.atob(s.substring(pre.length, s.length - suf.length)),
               u = new Uint8Array(b.length);
             for (let i = 0; i < b.length; i++) { u[i] = b.charCodeAt(i); }
             o[k] = u.buffer;
           }
         } else { helper.bta(o[k]); }
       }
     }
   };

Picking back up on the JavaScript client side, we use the helper.bta function to convert the credential request from the server into what the WebAuthn API needs:

   let data = await response.json();
   helper.bta(data);
   let credential = await navigator.credentials.create(data);

This will trigger a request in the user’s browser that looks something like this (Firefox on Mac):

[Image: the browser’s passkey creation prompt]

If the user clicks Continue, the OS creates a public/private key pair and provides the credential information back to the browser, which you then pass on to the server to store and associate with the user. The JavaScript code looks something like this:

   try {
     let credential = await navigator.credentials.create(data);
     let credential_data = {
       client: credential.response.clientDataJSON ? helper.atb(credential.response.clientDataJSON) : null,
       attest: credential.response.attestationObject ? helper.atb(credential.response.attestationObject) : null
     };
     form_data.append('credential', JSON.stringify(credential_data));
     let response = await fetch('/backend-signup.php', {
       method: 'POST',
       body: form_data
     });
   }
   catch (e) {
     // This is when the user cancels the registration or if the registration fails.
   }

The credential_data object is created with the necessary information from the credential, using the atb helper method to convert the array buffers to base64-encoded strings. Then it’s JSON-encoded and passed on to the server. Now when the server gets it, there are a lot of verification steps (19!) in the official specification, and the PHP WebAuthn library takes care of them for you, throwing its own WebAuthnException if anything fails.

   $credential_data = json_decode($_POST['credential'], true);
   $client_data = base64_decode($credential_data['client']);
   $attestation_data = base64_decode($credential_data['attest']);

   try {
     $credential = $webauthn->processCreate(
       $client_data,
       $attestation_data,
       $_SESSION['challenge']
     );
     // If you got here, the passkey was created successfully and is valid.
     // Let's create the user account.
     $user_id = create_user($_SESSION['email']);
     // Let's create the passkey.
     $nickname = $_SERVER['HTTP_USER_AGENT'] . ' - ' . $_SESSION['email'];
     $passkey = create_passkey($user_id, $credential, $nickname);
     // Let's clean up the session.
     session_unset();
     // Let's save the user id to the session, logging them in.
     $_SESSION['user_id'] = $user_id;
     session_write_close();
     // Let's inform the client that the passkey was created successfully.
     echo json_encode(['success' => 'Passkey created successfully']);
   }
   catch (WebAuthnException $e) {
     echo json_encode(['error' => $e->getMessage()]);
     exit;
   }

Remember, in this use case the user is signing up for the website and creating a passkey at the same time. So we create the user, use its user ID # to create the passkey, and store the user ID in the session. If you already have the user ID # (say it’s a known user who wants to add a passkey to their account), then you skip that step. The passkey table uses this schema:

   CREATE TABLE `passkey` (
     `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
     `user_id` mediumint(8) unsigned NOT NULL,
     `unique_id` varchar(16) NOT NULL DEFAULT '',
     `nickname` varchar(255) NOT NULL DEFAULT '',
     `credential_id` varchar(100) NOT NULL,
     `public_key` varchar(255) NOT NULL,
     `created_at` bigint(20) unsigned NOT NULL,
     `modified_at` bigint(20) unsigned NOT NULL,
     PRIMARY KEY (`id`)
   );

The user_id comes from the inserted (or passed along) user ID #. The unique_id comes from the backend-signup-pre script, which creates a random hex code, saves it to the session, and puts it into the credential request. Later on, when the user is logging in with their passkey, you’ll be able to retrieve this unique_id, use it to query the passkey table to find the matching passkey, and use that to verify the request and log in the associated user. The nickname provides a user-friendly label for the passkey, built from the User Agent information and the email address the user specified. The credential_id and public_key come from the object returned by the processCreate call. This is an example of what a record looks like in the table:

mysql> select * from passkey \G
*************************** 1. row ***************************
        id: 1
   user_id: 2
unique_id: bae1434f279f6d4d
  nickname: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:133.0) Gecko/20100101 Firefox/133.0 - [email protected]
credential_id: 060a5f046bcbd1b9170b2187728cfacc4a5d26c5
   public_key: -----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEw3zwXXCIKWMzqNIlbx1L/iO134Uy
9CyrdjfVGK1f7+ktqCxlwaQwaEyTJTIpi60qpsIghFOXGV/k5HcjWLX7eA==
-----END PUBLIC KEY-----
   created_at: 1734127652
  modified_at: 1734127652
1 row in set (0.00 sec)

So now you can associate the web session with the user ID and associate that user with the rest of their session with your site.
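
For reference, here is a minimal sketch of what the create_user() and create_passkey() helpers used above could look like, assuming a PDO connection in $pdo and a user table alongside the passkey table; the property names on $credential come from the object returned by processCreate(), but verify them against the library version you use:

   // Hypothetical helpers matching the flow above.
   function create_user(string $email): int {
     global $pdo;
     $stmt = $pdo->prepare('INSERT INTO user (email) VALUES (?)');
     $stmt->execute([$email]);
     return (int) $pdo->lastInsertId();
   }

   function create_passkey(int $user_id, object $credential, string $nickname): int {
     global $pdo;
     $stmt = $pdo->prepare(
       'INSERT INTO passkey
          (user_id, unique_id, nickname, credential_id, public_key, created_at, modified_at)
        VALUES (?, ?, ?, ?, ?, ?, ?)'
     );
     $stmt->execute([
       $user_id,
       $_SESSION['unique_id'],             // hex string from the pre step
       $nickname,
       bin2hex($credential->credentialId), // binary ID, hex-encoded for storage
       $credential->credentialPublicKey,   // PEM-encoded public key
       time(),
       time(),
     ]);
     return (int) $pdo->lastInsertId();
   }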


Logging In With a Passkey

But what about later, when the session has expired and the user comes back ready to resume their account work? This is where the login functionality comes into play, and from a super high level it’s much like the signup process: you send a request to the backend to generate a login challenge, use its response to request site credentials from the user’s browser/OS, then feed the result back to the server to validate. Upon validation, you log the user in, associating their user ID with the rest of their session.

First, to send the login challenge, it’s a simple GET request:

   let response = await fetch('/backend-login-pre.php');

And on the backend, we use the WebAuthn library to create a credential request that looks like this:

   $domain = "lndo.site";
   $webauthn = new WebAuthn("Simple Passkey App", $domain);
   $args = $webauthn->getGetArgs();
   $_SESSION['challenge'] = ($webauthn->getChallenge())->getBinaryString();
   echo json_encode($args);

Then back on the frontend, we use our helper method to convert the data from base64 to an array buffer and feed it into the browser’s credentials.get() method:

   let data = await response.json();
   helper.bta(data);
   let credential = await navigator.credentials.get(data);

That call will trigger the interaction between the browser and the OS to retrieve the saved passkey, which looks something like this (Firefox on Mac):

[Image: the browser’s saved passkey selection prompt]

The credential that comes back uses array buffers, so it needs to be converted to base64 and then sent to the server for validation.

   let credential_data = {
     id: credential.rawId ? helper.atb(credential.rawId) : null,
     client: credential.response.clientDataJSON ? helper.atb(credential.response.clientDataJSON) : null,
     auth: credential.response.authenticatorData ? helper.atb(credential.response.authenticatorData) : null,
     sig: credential.response.signature ? helper.atb(credential.response.signature) : null,
     user: credential.response.userHandle ? helper.atb(credential.response.userHandle) : null
   };
   let form_data = new FormData();
   form_data.append('credential', JSON.stringify(credential_data));
   response = await fetch('/backend-login.php', {
     method: 'POST',
     body: form_data
   });

On the server side, the data comes in base64-encoded and needs to be decoded back into binary. Some of those values can be passed directly into the WebAuthn library’s processGet method, but you can use bin2hex to convert the credential ID and userHandle into data that matches up with the initial signup request.

   $credential_data = json_decode($_POST['credential'], true);
   $credential_id = bin2hex(base64_decode($credential_data['id']));
   $unique_id = bin2hex(base64_decode($credential_data['user']));

Now you can query your passkey table for a passkey that matches that credential_id and unique_id. Then you can call WebAuthn’s processGet method with other data from the request (the client, auth, and sig), the passkey (the public_key), and the challenge created from the previous step.

   // This is a database query that returns the matching row.
   $passkey = get_passkey($credential_id, $unique_id);
   $client = base64_decode($credential_data['client']);
   $auth = base64_decode($credential_data['auth']);
   $sig = base64_decode($credential_data['sig']);
   $valid = $webauthn->processGet(
     $client,
     $auth,
     $sig,
     $passkey['public_key'],
     $_SESSION['challenge']
   );
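
The get_passkey() helper from the snippet above is just a lookup; a minimal sketch, again assuming a PDO connection in $pdo:

   // Returns the matching passkey row as an associative array, or null.
   function get_passkey(string $credential_id, string $unique_id): ?array {
     global $pdo;
     $stmt = $pdo->prepare(
       'SELECT * FROM passkey WHERE credential_id = ? AND unique_id = ?'
     );
     $stmt->execute([$credential_id, $unique_id]);
     $row = $stmt->fetch(PDO::FETCH_ASSOC);
     return $row === false ? null : $row;
   }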

If there was a matching passkey and if the processGet call returned true, then you have a valid authentication request and you can log in the user.

   if ($valid) {
     unset($_SESSION['challenge']);
     // Let's save the user id to the session, logging them in.
     $_SESSION['user_id'] = $passkey['user_id'];
     echo json_encode(['result' => 'success']);
   }
   else {
     echo json_encode(['result' => 'invalid']);
   }

Then on the client side, you can look at the response and if it was successful, redirect the user to the logged-in page, which can now rely on the user ID to personalize the page.

   data = await response.json();
   if (data.result == 'success') {
     window.location.href = '/dashboard.php';
   }

So there you go! This high-level overview will get you up and running. From there, you can delve deeper and customize to fit your specific requirements. I’ve published the complete code and working example on GitHub, and if you have any questions, you can reach out to me via email or use the Q&A section in that GitHub repo.


In Conclusion

One of the strengths of this system is that it’s not susceptible to phishing attacks. Credentials are bound to the domain where they were created, so if the user ends up clicking a malicious link that takes them to a site that merely looks like a legitimate site, any request for a passkey from the malicious site will not match up with the legitimate site’s passkey.

Unfortunately, while WebAuthn is widely available, its adoption is not yet mainstream. While some sites have passkey support, they also keep other login methods running, like OAuth and even the basic password. Passkeys are still a foreign concept to most users, but this is a great time for a web developer to learn and adopt them, to be prepared for when they hopefully become more mainstream. Once users see how easy it is to create an account and log in with WebAuthn, they will love going to sites that support it.

17 Years in the Life of ElePHPant
https://phpconference.com/blog/keynote-17-years-in-the-life-of-elephpant/

In the vast and dynamic world of programming languages, PHP stands out not only for its versatility but also for its unique and beloved mascot – the elePHPant. For 17 years, this charming blue plush toy has been an iconic symbol of the PHP community, capturing the hearts of developers worldwide.


The story of the elePHPant began in Canada, where Damien Seguy, the founder and father of the elePHPant, first brought this adorable creature to life. Little did he know that this creation would become a global ambassador for the PHP language, spreading joy and camaraderie among developers on every continent, including the frosty expanses of Antarctica.

At this year’s International PHP Conference (IPC), Damien Seguy took center stage to share the remarkable journey of the elePHPant. The keynote presentation was a nostalgic trip through the past 17 years, highlighting the elePHPant’s adventures, milestones, and enduring impact on the PHP community.


The elePHPant’s global travels are a testament to the interconnectedness of the PHP community. From North America to Europe, Asia, Africa, Australia, and even the remote corners of Antarctica, the elePHPant has become a cherished companion for PHP developers everywhere. It has been a source of inspiration, a conversation starter at conferences, and a symbol of the shared passion that unites developers across borders.

Beyond its physical presence, the elePHPant has also made its mark in the digital realm. It is a common sight on social media, where developers proudly share photos of their elePHPant companions during meetups, conferences, and coding sessions. The elePHPant’s virtual presence reflects the close-knit and supportive nature of the PHP community.

The IPC keynote offered a glimpse into the evolution of the elePHPant, showcasing the various editions and designs created over the years. Each elePHPant is a unique piece of PHP history, and collectors worldwide treasure them as valuable artifacts.

As the PHP language continues to evolve, so does the legacy of the elePHPant. It remains a symbol of the vibrant and passionate PHP community, which values collaboration, knowledge-sharing, and the joy of coding. The elePHPant’s 17-year journey is a testament to the enduring spirit of PHP developers worldwide. As it continues to travel the globe, it carries the memories and experiences of every coder who has crossed paths with this beloved mascot.

Asynchronous Programming in PHP
https://phpconference.com/blog/asynchronous-programming-in-php/

When starting this article I wanted to write about quite a lot of things and quite a lot of concepts. However, trying to explain just the fundamental blocks of what asynchronous programming is, I quickly hit the character limit I had and was faced with a choice. I had to decide between going into the details of the A’s and B’s or giving an eagle’s-eye perspective of what is out there in the async world. I chose the former.


We will cover a very basic, naive and simplistic take on what asynchronous programming is like. However, I do believe that the example we explore will give the reader a good enough picture of the building blocks of a powerful and complex technique.

Enjoy!

A service for fetching news

Imagine we work in a startup! The startup wants to build this really cool new service where users input a topic into a search field and get a bunch of news collected from the best online news sites there are. We are the back-end engineering team and we are tasked with building the core of this fantastic new product: the news aggregator. Luckily for us, all of the online news agencies we will be querying provide nice APIs. All we need to do, for each requested topic, is make a call to each of the APIs, collect and format the data so it’s readable by our front-end, and send it to the client. The front-end team takes care of displaying it to the user. As with any startup, hitting the market super fast is of crucial importance, so we create the simplest possible script and release our new product. Below is the script of our engine.

  <?php

  $topic = $_GET['topic'];

  $europe_news = file_get_contents("https://api.europe-news.org?q=$topic");
  $asia_news = file_get_contents("https://api.asia-news.org?s=$topic");
  $africa_news = file_get_contents("https://api.africa-news.org?q=$topic");

  $formatted = [
    'europe_news' => format_europe($europe_news),
    'asia_news' => format_asia($asia_news),
    'africa_news' => format_africa($africa_news)
  ];

  echo json_encode($formatted);

This is as simple as it gets! We give a big “Thank you” to the creators of PHP for making the wonderful file_get_contents() function, which drives our API communications, and we launch our first version.


Our product proves to be useful, and the number of clients using it increases from day to day. As our business expands, so does the demand for news from the Americas and from some other countries. Our engine is easy to extend, so we add news from the respective news services in a matter of minutes. However, with each additional news service, our aggregator gets slower and slower.

A couple of months later, our first competitor appears on the market. They provide the exact same product, only theirs is blazingly fast. We now have to quickly come up with a way to drastically improve our response time. We try upgrading our servers, scaling horizontally with more machines, and paying for a faster Internet connection, but we still don’t get anywhere close to the incredible performance of our competitor. We are in trouble and we need to figure out what to do!

The Synchronous nature of PHP

Most of you have probably already noticed what is going on in our “engine” and why adding more news sites makes things slower and slower. Whenever we make a call to a news service in our script, we wait for the call to complete before we make the next call. The more services we add, the more we have to wait. This is because the built-in tools that PHP provides us with are in their nature designed for a synchronous programming flow. This means that operations are done in a strict order and each operation we start must first end before the next one starts. This makes the programming experience nice, as it is really easy to follow and to reason about the flow. Also, most of the time a synchronous flow fits perfectly with our goals. However, in this particular example, the synchronous flow of our program is what in fact slows it down. Downloading data from external services is a slow operation and we have a bunch of downloads. However, nothing in our program requires the downloads to be done sequentially. If we could do the downloads concurrently, this would drastically improve the overall speed of our service.
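
To put rough numbers on it, here is an illustrative sketch; the URLs match our engine and the timings are made up, assuming each request takes about two seconds:

$start = microtime(true);

// Each call blocks until its download completes, so the waits add up.
$europe = file_get_contents("https://api.europe-news.org?q=php"); // ~2s
$asia   = file_get_contents("https://api.asia-news.org?s=php");   // ~2s, starts only after $europe
$africa = file_get_contents("https://api.africa-news.org?q=php"); // ~2s, starts only after $asia

printf("Took %.1f seconds\n", microtime(true) - $start); // roughly 6 seconds in total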

A little bit about I/O operations

Before we continue, let’s talk a little about what happens when we work with any input/output operations. Whether we are working with a local file or talking to a device in our computer or communicating over a network, pretty much the flow is the same. It goes something like this.

When sending/writing data…

  • There is some sort of memory which acts as an output buffer. It may be allocated in the RAM or it may be memory on the device we are talking to. In any case, this output buffer is limited in size.
  • We write some of the data we want to send to the output buffer.
  • We wait for the data in the output buffer to get sent/written to the device with which we are communicating.
  • Once this is done, we check if there is more data to send/write. If there is, we go to 2. If not, we go back to whatever we were doing immediately before we requested the output operation (we return).

When we receive data a similar process occurs.

  1. There is an input buffer. It also is limited in size.
  2. We make a request to read some data.
  3. We wait while the data is being read and placed into the input buffer.
  4. Once a chunk of data is available, we append its contents to our own memory (in a variable probably).
  5. If we expect more data to be received, we go to 3. Otherwise we return the read data to the procedure which requested it and carry on from where we left off.

Notice that in each of the flows there is a point in which we wait. The waiting point is also in a loop, so we wait multiple times, accumulating waiting time. And because output and input operations are super-slow compared to the working speed of our CPU, waiting is what the CPU ends up spending most of its time doing. Needless to say, it doesn’t matter how fast our CPU or PHP engine is when all they’re doing is waiting for other slow things to finish.

Lucky for us, there is something we can do.


The above processes describe what we call blocking I/O operations. We call them blocking, because when we send or receive data the flow of the rest of the program blocks until the operation is finished. However, we are not in fact required to wait for the finish. When we write to the buffer we can just write some data and instead of waiting for it to be sent, we can just do something else and come back to write some more data later. Similarly, when we read from an input buffer, we can just get whatever data there is in it and continue doing something else. At a later point we can revisit the input buffer and get some more data if there is any available. I/O operations which allow us to do that are called non-blocking. If we start using non-blocking instead of blocking operations we can achieve the concurrency we are after.

Concurrently downloading files

At this point it would be a good idea for our team to look into the existing tools for concurrent asynchronous programming with PHP, like ReactPHP and AMPHP. However, our team is imaginary and is in the lead role of a proof-of-concept article, so they are going to take the crooked path and try to reinvent the wheel.

Now that we know what blocking and non-blocking I/O operations are, we can actually start making progress. Currently, when we are fetching data from news services, we have a flow like the following:

  • Get all the data from service 1
  • Get all the data from service 2
  • Get all the data from service 3
  • ...
  • Get all the data from service n

Instead, the flow we want to have would look something like the following:

  • Get a little bit of data from service 1
  • Get a little bit of data from service 2
  • Get a little bit of data from service 3
  • Get a little bit of data from service n
  • Get a little bit of data from service 1
  • Get a little bit of data from service 3
  • Get a little bit of data from service 2
  • ...
  • We have collected all the data

In order to achieve this, we first need to get rid of file_get_contents().

Reimplementing file_get_contents()

The file_get_contents() function is a blocking one. As such, we need to replace it with a non-blocking version. We will start by re-implementing its current behavior and then gradually refactor towards our goal.

Below is our drop-in replacement for file_get_contents().

function fetchUrl(string $url) {
    $host = parse_url($url)['host'];
    $fp = @stream_socket_client("tcp://$host:80", $errno, $errstr, 30);
    if (!$fp) {
        throw new Exception($errstr);
    }
    stream_set_blocking($fp, false);
    fwrite($fp, "GET / HTTP/1.1\r\nHost: $url\r\nAccept: */*\r\n\r\n");

    $content = '';
    while (!feof($fp)) {
        $bytes = fgets($fp, 2048);
        $content .= $bytes;
    }
    return $content;
}

Let’s break down what is happening:

  1. We open a TCP socket to the server we want to contact.
  2. We throw an exception if there is an error.
  3. We set the socket stream to non-blocking.
  4. We write an HTTP request to the socket.
  5. We define a variable $content in which to store the response.
  6. We read data from the socket and append it to the response received so far.
  7. We repeat step 6 until we reach the end of the stream.

Note the stream_set_blocking() call we make. This sets the stream to non-blocking mode. We feel the effect of this when we later call fgets(). The second parameter we pass to fgets() is the number of bytes we want to read from the input buffer (in our case, 2048). If the stream mode is blocking, then fgets() will block until it can give us 2048 bytes or until the stream is over. In non-blocking mode, fgets() will return whatever is in the buffer (but no more than 2048 bytes) and will not wait if this is less than 2048 bytes.
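
The difference is easy to see in isolation. A small sketch, assuming $fp is an already connected socket stream:

// Blocking mode: fgets() waits until it can satisfy the request
// (or the stream ends) before returning.
stream_set_blocking($fp, true);
$chunk = fgets($fp, 2048); // may pause here for a long time

// Non-blocking mode: fgets() returns immediately with whatever is
// currently buffered, possibly an empty string.
stream_set_blocking($fp, false);
$chunk = fgets($fp, 2048); // never waits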


Although we are now using non-blocking input, this function still behaves like the original file_get_contents(). Because of the loop in it, once we call it, we will be stuck until it’s complete. We need to get rid of this loop, or rather, move it out of the function.

We can break down what the function does in four steps:

  1. Initialization – opening the socket and writing the request
  2. Checking if we’ve reached the end of the stream
  3. Reading some data if not
  4. Returning the data if yes

Disregarding the loop, we can organize those parts in a class. The first three steps we will implement as methods, and instead of returning the data, we will simply expose the buffer as a public property.

class URLFetcher
{
    public string $content = '';
    private $fp;
    public function __construct(private string $url) {}

    public function start(): void {
        $host = parse_url($this->url)['host'];
        $this->fp = @stream_socket_client(...);
        if (!$this->fp) {
            throw new Exception($errstr);
        }
        stream_set_blocking($this->fp, false);
        fwrite($this->fp, "GET …");
    }

    public function readSomeBytes(): void {
        $this->content .= fgets($this->fp, 2048);
    }

    public function isDone(): bool {
        return feof($this->fp);
    }
}

Rebuilding the loop

Now we need to rebuild the loop. This time, instead of executing one loop per file, we want to have multiple files in one loop.

Because we now have many news services to fetch data from, we have refactored our initial code to hold their names and URLs in an array.

$services = [
    'europe' => 'https://api.europe-news.org?q=%s',
    'asia' => 'https://api.asia-news.org?s=%s'
    ...
];

For each service we will create a URLFetcher and ‘start’ it. We will also keep a reference to each of the fetchers.

$fetchers = [];
foreach ($services as $name => $url) {
    $fetcher = new URLFetcher(sprintf($url, $topic));
    $fetcher->start();
    $fetchers[$name] = $fetcher;
}

Now we will add the loop in which we will iterate through the fetchers, reading some bytes from each of them upon each iteration.

$finishedFetchers = [];
while (count($finishedFetchers) < count($fetchers)) {
    foreach ($fetchers as $name => $fetcher) {
        if (!$fetcher->isDone()) {
            $fetcher->readSomeBytes();
        } else if (!in_array($name, $finishedFetchers)) {
            $finishedFetchers[] = $name;
        }
    }
}

The $finishedFetchers array helps us track which fetchers have finished their work. Once all of the fetchers are done, we exit the loop. The data gathered is accessible through the $content property of each fetcher. This simple way of downloading data concurrently gives us an incredible performance boost.

Having successfully solved our performance issues, we beat the competition and our business continues to grow. And with it grow the requirements for our engine.

One of the new features we need to implement in the next trimester is a history of all the topics our users have searched for and the results they got for them. For this we want to use an SQL database, but when attempting to add it to the mix, the numerous inserts we perform for each topic slow down our service significantly. We already know what the problem is: the execution of the database queries is blocking, and thus each insert delays the execution of everything else. We immediately take action and develop our own implementation of concurrent DB inserts. However, adding those to the loop we have proves to be quite a mess. The inserts need looping and tracking of their own, but they also need to track the requests to the services, because we cannot do an insert before having the data from the respective news service. Once again, we have to rethink our lives.

Generalizing the Loop

It is clear that if we want to take advantage of other non-blocking operations, we need some sort of handy, generic way to add more things to the ‘driving’ loop: one that makes it possible to dynamically add more work to be executed. It turns out creating such a loop is quite simple.

class Loop 
{
    private static array $callbacks = [];

    public static function add(callable $callback) 
    {
        self::$callbacks[] = $callback;
    }

    public static function run() 
    {
        while (count(self::$callbacks)) {
            $cb = array_shift(self::$callbacks);
            $cb();
        }
    }
}

The $callbacks array acts as a FIFO queue. At any point in our program we can add functions to it to get executed. Once we call the run() method, functions on the queue will start being executed. The run() method will keep going until there are no callbacks left in the queue. This can potentially be forever, as each of the callbacks may add new callbacks while being executed.
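
A tiny usage sketch shows the queue in action, including a callback that schedules more work while running:

Loop::add(function () {
    echo "first\n";
    // A callback may enqueue further callbacks.
    Loop::add(fn () => print("third\n"));
});
Loop::add(fn () => print("second\n"));

Loop::run(); // prints: first, second, third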

The next step is to adapt our downloading tools. We can create a small function that works with our file fetcher class and with the loop.

function fetchUrl(string $url) {
    $fetcher = new URLFetcher($url);
    Loop::add(fn () => $fetcher->start());

    $tick = function () use ($fetcher, &$tick) {
        if (!$fetcher->isDone()) {
            $fetcher->readSomeBytes();
            Loop::add($tick);
        }
    };
    Loop::add($tick);
}

In this new version of fetchUrl() we instantiate a fetcher and add a callback to the loop which will start the download. Then we create a closure which we also add to the loop. When called, the closure will check if the fetcher is done; if it’s not, it will read some bytes and add itself to the loop again. This will ‘drive’ reading from the stream until the end is reached.

All we have to do now is add all our services to the loop and start it:

foreach ($services as $url) {
    fetchUrl($url);
}
Loop::run();

This will indeed download the data from all of the services we need, but we have a major problem: we don’t have any means to get the results. We cannot get them from fetchUrl(), because it returns before the download has even started. We also want to record the fetched results to the database (remember the new feature we’re implementing) and we want to do this during the download. Otherwise we would have to wait and run a separate loop for recording things, and this would slow us down.


The solution to our problems is to add one more parameter to fetchUrl(): a callback function which will get called when downloading the data is complete. As a parameter, this callback will take the downloaded data, and in its body it will initiate the insertion into the database.

Below is the new fetchUrl(). The changes are the new $done parameter and the else branch that calls it:

function fetchUrl(string $url, callable $done) {
    $fetcher = new URLFetcher($url);
    Loop::add(fn () => $fetcher->start());

    $tick = function () use ($fetcher, $done, &$tick) {
        if (!$fetcher->isDone()) {
            $fetcher->readSomeBytes();
            Loop::add($tick);
        } else {
            $done($fetcher->content);
        }
    };
    Loop::add($tick);
}

And now the updated initialization:

$results = [];
foreach ($services as $name => $url) {
    fetchUrl(
        $url,
        function (string $content) use ($name, &$results) {
            $results[$name] = $content;
            insertIntoDatabase($content);
        }
    );
}
Loop::run();

The callback now collects the results from the news service and initiates the database insert. The database insert will use similar techniques, taking advantage of the Loop to run concurrently with the other tasks, and thus we eliminate the need to wait for another loop.
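
The article does not show insertIntoDatabase(), but as a rough illustration of those “similar techniques”, here is one possible sketch built on mysqli’s asynchronous queries (MYSQLI_ASYNC and mysqli_poll()); the connection details and the history table are made up:

function insertIntoDatabase(string $content): void {
    // One connection per insert keeps the sketch simple; a real
    // implementation would reuse pooled connections.
    $db = new mysqli('localhost', 'user', 'secret', 'news_history');
    $sql = sprintf(
        "INSERT INTO history (content) VALUES ('%s')",
        $db->real_escape_string($content)
    );
    $db->query($sql, MYSQLI_ASYNC); // returns immediately, query runs in the background

    $tick = function () use ($db, &$tick) {
        $links = $errors = $reject = [$db];
        // Poll with a zero timeout so we never block the loop.
        if (mysqli_poll($links, $errors, $reject, 0) > 0) {
            $db->reap_async_query(); // the INSERT has finished
            $db->close();
        } else {
            Loop::add($tick); // not done yet, check again on a later iteration
        }
    };
    Loop::add($tick);
}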

Error Handling

There are many things that can go wrong while downloading data from the Internet, but in our example we are only throwing one exception from within the start() method of the URLFetcher. For the sake of simplicity we are going to keep it this way. You may have noticed that so far we haven’t been dealing with this exception at all. Time to address this oversight.

A naive colleague from our imaginary team tried to handle the issue by enclosing the calls to fetchUrl() in a try-catch block like this.

foreach ($services as $name => $url) {
    try {
         fetchUrl(...);
    } catch (Exception $e) {
         ...
    }
}
Loop::run();

The production environment quickly and painfully demonstrated to our team that the exception somehow slipped out of the try-catch block and went unhandled into our script to break it.

Well, the thing is, fetchUrl() does not actually throw any exceptions. It merely adds callbacks to the loop. One of those callbacks (the one initializing the fetcher) throws an exception, but it does not get called until later on. It is only when we start the loop (call Loop::run()) that the exceptions start being thrown. Enclosing the Loop::run() call in a try-catch block will allow us to catch exceptions thrown from within it, but at this level of handling we won’t know what threw them. And even if we did know that, how would we return to the flow of the respective function after handling the error?

The way we can deal with this situation is by adding one more callback parameter to the fetchUrl() function. The new callback will get called whenever an error occurs. So fetchUrl() will look something like this:

function fetchUrl(string $url, callable $done, callable $onerror) {
    $fetcher = new URLFetcher($url);
    Loop::add(function () use ($fetcher, $onerror) {
        try {
            $fetcher->start();
        } catch (Exception $e) {
            $onerror($e);
        }
    });

    $tick = function () use ($fetcher, $done, $onerror, &$tick) {
        if (!$fetcher->isDone()) {
            try {
                $fetcher->readSomeBytes();
            } catch (Exception $e) {
                $onerror($e);
            }
            Loop::add($tick);
        } else {
            $done($fetcher->content);
        }
    };
    Loop::add($tick);
}

And respectively the calling code would now look like this:

foreach ($services as $name => $url) {
    fetchUrl(
        $url, 
        function (string $content)  {...}, 
        function (Exception $e) {...}
    );
}
Loop::run();

Now we can handle error situations properly via the new callback.

Retrospect

By the end of the story, in order to allow concurrent operations, our imaginary team had started practicing asynchronous programming in a single-threaded environment based on non-blocking I/O and an event loop. The last sentence is bloated with terminology and I would like to talk briefly about terms.

Concurrency

Both in computer and in general contexts, this means to be dealing with more than one thing at a time. In our example we were downloading data from multiple Internet services and inserting entries in a database at the same time.

Asynchrony

“Asynchrony, in computer programming, refers to the occurrence of events independent of the main program flow and ways to deal with such events.”

Wikipedia

In our example the main program flow was dealing with downloading data, inserting records, encoding for the clients and sending to them. The “events” outside of the main flow were in fact the events of new data being available for reading, the successful completion of sending data, etc.

Non-blocking I/O

We based our work on the ability to “query” I/O for its availability. The way we did it was to periodically check if we could use the “device”. This is called polling. Since polling requires CPU cycles, our program becomes more CPU demanding than it needs to be. It would have been smarter if we had “outsourced” the polling to some sort of a lower level “actor” like our operating system or a specialized library. We could then communicate with it via events, interrupts or another mechanism. In any case though, whatever this mechanism for communicating with I/O devices was, at the end of the day it would still be built upon non-blocking I/O polling and maybe hardware interrupts.


Event loop

Notice we didn’t call our loop an event loop but just a “loop”. This was intentional, because it would have brought confusion as we hadn’t mentioned events anywhere else in the example. An event loop is just a specific version of a loop like ours. It is designed to work in conjunction with event-based I/O communication and thus the name “event loop”. It allows callbacks to be executed when a certain event occurs, making it more “user-friendly” for the programmer, but essentially it is the same thing. Other names for an event loop are message pump, message dispatcher, run loop and more.

… and last, but not least…

Single-threaded

PHP has a single-threaded model of execution and this will most probably always be the case. This means that all of the instructions to the engine (and from there to the CPU) are executed one-by-one and nothing ever happens in parallel. But wait! We just created a program which downloads data in parallel. True – the downloads happen in parallel but the instructions which control the download flow do not. We simply switch from one download to the other, but never in fact execute two things at the same time. This leads to a problem which we must always keep in mind when doing async single-threaded programming. Because instructions are not in fact executed in parallel, if we throw a heavy computation somewhere inside the loop, everything else will be stuck until the computation is complete.
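
A tiny sketch with our Loop makes the issue visible:

Loop::add(function () {
    sleep(5); // stand-in for a heavy computation: the whole loop stalls here
});
Loop::add(fn () => print("I only run after the sleeper is done\n"));
Loop::run();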

Let’s look into another example in order to better illustrate the problem of blocking the loop.

We want to create a server. As any other server does, it will listen on a port and await connections from clients. Once a client is connected they will be able to make some sort of a request to the server and the server will serve it. Of course, we need to be able to serve multiple clients at the same time.

We can use the technique we’ve discussed so far to create such a server. It would open a port and poll it for connections. When a client connects, it will use non-blocking I/O to communicate with the client and will continue to switch between this communication, checking for new connections, and serving already established connections. However, if, for example, a client asks the server to calculate a very long Fibonacci sequence, the server will be stuck on it and will not be able to do anything else for the other clients before it finishes. Connections will time out, new ones will not be accepted, etc. Essentially, the server will be gone. If we want our server to execute heavy computational tasks and still be responsive, we need actual parallelism of execution, either by multi-threading or by spawning new processes to carry the heavy work for us.

So, why don’t we do this by default, instead of dealing with this loop-switching thing? Starting and switching between threads and processes is a lot heavier and slower than “staying in one” process/thread and doing the work ourselves. Our approach works perfectly for I/O-heavy and CPU-light programs (and most of what we do falls into this category). If we do need those CPU cycles, however, multi-threading/processing is the way to go.

Final words

These were just the very basic oversimplified building blocks of what asynchronous programming is about. There is a lot more to be said, but this is an article, not a book, so we have to stop somewhere. If you are interested in the topic, I would suggest further research on promises, coroutines and fibers.

Enjoy the rabbit hole that asynchronous programming is!


Laravel vs. Symfony: A Side by Side Comparison
https://phpconference.com/blog/laravel-vs-symfony-a-side-by-side-comparison/

When facing the start of a brand new PHP application there is one decision that can’t be overlooked: which framework should you use? Theoretically, you could start with none, but assuming the project at hand is anything but trivial, that’d probably be a bad idea. The good news is you’re not exactly short of options to choose from.


The bad news is you’re not exactly short of options to choose from… which can make the decision a little harder than you might expect. Then again, even though there are many options out there, simply by scratching the surface you’ll notice that two stand above the rest: Laravel and Symfony.

At a glance, there’s one main difference between Symfony and Laravel: Symfony is both an application framework and a set of reusable components, while Laravel is simply a framework (in fact, Laravel uses quite a few of Symfony’s components). For the purpose of this post, I’ll refer to Symfony as a framework, but keep in mind that, because of how it’s built, you could consider it a meta-framework. In the following sections, you’ll learn the similarities and differences between them and how to choose the one that best suits the needs of your project.

Points We Will Cover in This Article

  • What they have in common
  • Installation procedure
  • Directory structure
  • CLI tool
  • Configuration
  • ORM
  • Template Engine
  • Framework Extensions
  • Testing
  • Performance
  • Security
  • Internationalization
  • Project governance
  • Popularity

What they have in common

Needless to say, the first similarity you’ll notice is that they are both PHP frameworks. But that’s just the beginning. Here are the most outstanding features of both:

  • Open Source projects
  • Based on the MVC pattern (which means there are no big conceptual differences)
  • A CLI tool is available for common tasks
  • Code is organized in a particular way
  • Testing tools are available
  • Cover the full stack leveraging existing projects (ORMs, Template Engines, etc…)
  • Can be run on multiple platforms (Operating systems and database engines)
  • Have built-in internationalization features
  • Can be easily extended

Now let’s get into the specifics so you can get a better understanding of their differences.


Installation Procedure

For the purpose of showing commands I’ll be using a fresh Ubuntu 21.04 box virtualized using Vagrant; you can find the image here.

Prior to running the commands I’ll be showing, you’ll have to install the basic prerequisites; at minimum, PHP itself and Composer.

When it comes to installation, they’re both fairly easy to start with; since both use Composer, there’s not much mystery to it.

Installing Symfony

The easiest way to get a symfony project started is by running the following command:

composer create-project symfony/website-skeleton sfTest

This will, assuming every dependency is met, leave you with a basic web application you can further customize to meet your needs.

An alternative installation method, which is actually recommended by Symfony, is to install a new binary in your system.

Should you choose this option, the command used to create a new project is:

symfony new sfTest --full

There are several advantages to using this method; one of them is that you get a very handy command:

symfony check:requirements

Which will help you detect and eventually fix any missing dependencies.

Either way, upon successful installation you’ll get a welcome screen telling you what the next steps are.

Installing Laravel

Laravel also offers different ways of installing, the easiest one is running the following command:

composer create-project laravel/laravel lvlTest

Another way to get a Laravel project started is by previously installing a Laravel installer (much like Symfony) by running:

composer global require laravel/installer

And then:

laravel new lvlTest

Directory structure

Symfony directory structure

If you go into your newly created project and list its files, you’ll see Symfony’s standard directory layout.

Your code is expected to be organized as follows:

  • src will contain all of your business logic (Entities, Containers, Services, etc…)
  • templates will hold the presentation code (mostly html-like files)
  • tests will be stored in the tests directory
  • migrations will be the place for database upgrade scripts

And then there are other artifacts that have a particular place in the project:

  • bin is where you’ll find useful CLI tools
  • vendor is where dependencies live (common behavior to every composer-based application)
  • config is where the configuration will be
  • In translations you’ll put the i18n related resources
  • var is a directory used internally by the framework, mostly for optimization purposes

Finally, the public directory will be the point of contact with the outside world; here you’ll find the application’s entry point: the index.php file.

Laravel directory structure


In the case of Laravel, the code is distributed along the following directories:

  • app holds the core code of your application
  • database is where you keep all of your database related code (Migrations, initialization scripts, Models, etc…)
  • In tests you’ll store your testing code
  • Your templates and other static files will be stored at resources

The configuration will mostly live within config, though the URL-to-Controller mapping will be stored in routes.


CLI tool

In the case of Symfony, the CLI tool is found at bin/console. Simply running php bin/console from your project’s root directory will print a complete list of available commands.

And then, if you need help with a specific command, you can run:

php bin/console help <COMMAND>

For instance, let’s look for help on make:auth command:

php bin/console help make:auth

Will produce a description of the command, its arguments, and its options.

It’s very likely that you’ll spend quite a bit of time using the console component if you use Symfony.

In the case of Laravel, a similar tool is available right at the root directory of your application: artisan.

If you run:

php artisan

You’ll get a very similar list of available commands.

A very familiar look, right? The fact is, artisan is built using the very same symfony/console component; that’s why its UI is so similar to Symfony’s console.


Coding style

Coding in Symfony is heavily based around the concept of Dependency Injection, which creates loosely coupled classes, making testing and long-term maintenance easier.

Laravel can be used in a similar fashion but, by default, it proposes the usage of Facades and helper functions, which, while easier to implement, can become a challenge in the long run.
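
A quick contrast between the two styles (the class and service names are illustrative, but each snippet is idiomatic for its framework):

// Symfony style: the dependency is injected, so the class declares what it needs.
use Symfony\Component\Mailer\MailerInterface;

class SignupController
{
    public function __construct(private MailerInterface $mailer) {}
}

// Laravel style: a Facade resolves the mailer behind a static call.
use Illuminate\Support\Facades\Mail;

Mail::raw('Welcome!', function ($message) {
    $message->to('user@example.com');
});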

ORM

While virtually any ORM can be used in conjunction with Symfony, the default is Doctrine, which implements the DataMapper pattern.

In Laravel, the default ORM is Eloquent, which is based on ActiveRecord. The main difference between these two models is that in Doctrine, entities are POPOs (plain old PHP objects), meaning they can be used in a variety of scenarios, even outside the context of the ORM.

Eloquent proposes a structure where the Models are an extension to a base class which has all the logic for database access.

One advantage to using the DataMapper pattern is the ability to further optimize database operations by queueing them instead of immediately running them.
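
The difference in feel, with a hypothetical Article class in both cases:

// Eloquent (ActiveRecord): the model itself knows how to persist.
$article = new Article();
$article->title = 'Hello';
$article->save(); // runs the INSERT right away

// Doctrine (DataMapper): the entity is a plain object; the EntityManager
// maps it to the database and can batch the work.
$article = new Article();
$article->setTitle('Hello');
$entityManager->persist($article); // only queued at this point
$entityManager->flush();           // all pending changes are executed here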

Configuration

Symfony offers different kinds of configuration, which may be a little hard to understand at first glance.

There's per-environment configuration (.env files), meant to hold the very basic information the application needs to run, such as database credentials.

Then there's the bundles configuration (YAML, XML, or PHP files located within config/packages), meant to determine the behavior of the application in a given environment. For instance, the handling of email should not be the same in production as in testing or development.

Finally, there's another way to deal with configuration: PHP annotations for PHP < 8 and class attributes for PHP >= 8. This is the most common approach for routes and ORM mapping information.

Usually, a combination of all of them is used within a project.
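For example, a route declared with a PHP 8 class attribute looks like this sketch (the BlogController is invented; attributes require PHP 8 and Symfony 5.2 or newer):

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;

class BlogController
{
    // The route is configured right next to the code it maps to
    #[Route('/blog', name: 'blog_index')]
    public function index(): Response
    {
        return new Response('Blog index');
    }
}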

The case of Laravel is a little simpler, there are .env files at the project root and simple PHP based files inside the config.

Template Engine

In Symfony, the default template engine is Twig, while Laravel uses Blade. In both cases you can use a different engine if you choose to, or even opt for not using one at all!

Syntax aside, there are no big differences between them, though Blade is usually perceived as simpler than Twig.

You can get a more detailed comparison between both template engines in this post.
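To get a feel for the two syntaxes, here is the same loop written in each engine (a sketch; the posts variable is invented):

{# Twig: dot notation, filters applied with a pipe #}
{% for post in posts %}
  <h2>{{ post.title|upper }}</h2>
{% endfor %}

{{-- Blade: plain PHP expressions inside the directives --}}
@foreach ($posts as $post)
  <h2>{{ strtoupper($post->title) }}</h2>
@endforeach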

 

Framework extension

In the case of Symfony, the way to extend the framework is by creating bundles. In Laravel's terminology, these are called packages.

In both cases, they can be distributed as standalone code libraries which can be brought into future projects.

Installing a bundle in Symfony means downloading the code, editing the appropriate YAML file, and indicating the bundle should be loaded at runtime by editing the file config/bundles.php.
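The registration file is plain PHP; a typical config/bundles.php looks roughly like this (the exact list depends on your project):

// config/bundles.php: each bundle is mapped to the environments it runs in
return [
    Symfony\Bundle\FrameworkBundle\FrameworkBundle::class => ['all' => true],
    Symfony\Bundle\TwigBundle\TwigBundle::class => ['all' => true],
    Symfony\Bundle\WebProfilerBundle\WebProfilerBundle::class => ['dev' => true, 'test' => true],
];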

This used to be somewhat complicated but nowadays, thanks to Flex, things are as easy as running a simple Composer command such as:

composer require symfony/apache-pack

This will take care of everything, leaving the bundle set up with a sensible default configuration.

In the case of Laravel the process is similar, just run a command like:

composer require laravel/breeze --dev

And everything you need to use the new package will be in place.

Testing

Symfony proposes the distinction between three types of tests: Unit, Integration and Application.

Unit tests are not really specific to either framework, as both rely on PHPUnit for the QA of individual classes.

In the case of Integration tests, Symfony offers a base class called KernelTestCase, which extends the standard TestCase to allow the usage of the dependency injection container for testing several parts of the application at once.

Finally, the Application tests are not very different from the others in terms of code, but the idea behind them is to simulate user interactions; thus they rely on an HTTP client and a DOM crawler.

Laravel proposes the usage of Feature tests (on top of Unit tests of course).

Feature tests aim to encompass large parts of the application, much like Symfony's Application tests.

In general, writing a Feature test for Laravel can be easier than the analogous for Symfony, since Laravel’s TestCase class offers a simpler API.
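For comparison, here is what a trivial "does the homepage respond" test could look like in each framework (a sketch; class names are invented, and the Laravel version assumes the standard Tests\TestCase base class):

// Symfony application test: boots the kernel and issues a request
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class HomepageAvailabilityTest extends WebTestCase
{
    public function testHomepageLoads(): void
    {
        $client = static::createClient();
        $client->request('GET', '/');

        $this->assertResponseIsSuccessful();
    }
}

// Laravel feature test: the base TestCase offers a compact HTTP API
use Tests\TestCase;

class HomepageAvailabilityFeatureTest extends TestCase
{
    public function test_homepage_loads(): void
    {
        $this->get('/')->assertStatus(200);
    }
}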

Performance

Measuring performance is always an obscure matter, as many variables come into play, but most benchmarks give Laravel the upper hand when it comes to application speed.

However, Symfony is known for its many optimization options so, should performance really be an issue for you, there are levers to pull.

Security

Symfony’s security system is powerful but, at the same time, a little complex to set up.

It allows for different ways of authentication as well as a very fine-grained permission model.

Laravel uses a rather simpler approach to security but, in the most common cases, the basic features will be enough.

Internationalization

Symfony supports several translation formats (PHP, .po/.mo, XLIFF, etc.), while Laravel uses PHP and JSON only.


Deployment

The deployment of a Symfony application involves:

  1. Uploading the code
  2. Creating the cache and logs directories and assigning appropriate permissions
  3. Creating environment files
  4. Installing third party libraries
  5. Building the cache
  6. Configuring the webserver

SymfonyCloud is a hosting service specifically designed to host Symfony applications.

The deployment of a Laravel application involves:

  1. Configuring the webserver
  2. Installing third party libraries
  3. Optimizing cache configuration loading
  4. Optimizing route loading
  5. Optimizing view loading

Two services are available for helping with deployment tasks: Laravel Forge and Laravel Vapor.

Popularity

On the popularity front Laravel is definitely the winner these days.

A comparison of search interest on Google Trends illustrates this.

 

And by GitHub stats: Symfony has 25.5k stars and Laravel has 65.8k stars.

Another number to keep in mind concerns the communities behind the frameworks: Symfony has 2,381 contributors while Laravel has 561.

Project governance

Release cycle

The Symfony project has a very strict release calendar: you can expect a new minor version every six months and a major version every two years.

You can find more details about it [here].

Recently Laravel moved from a 6 months to a 12 months major release cycle. You can read more about this change [here].

Support for older versions

Laravel offers both regular and LTS releases.

For LTS releases, bug fixes are provided for two years and security fixes for three years, as opposed to 18 months of bug fixes and two years of security fixes for regular releases.

Symfony also has a release cycle based on regular and LTS versions.

In the standard versions, bugs and security issues are fixed for eight months, while LTS versions get three years of bug-fix support and four years of security fixes.

Documentation

While both frameworks have extensive and detailed documentation available, Laravel's docs are usually easier to read, understand, and put into practice, while Symfony's work better as a quick reference once you know the basics.

 

Which One is Best for Your Software Development Project?

Of course, the answer depends on many factors but, in order to make a fair comparison, let’s say your team doesn’t possess any expertise in either.

Laravel was designed with rapid application development (RAD) in mind, so if your deadline is tight, you probably want to go with it, especially if your developers' seniority is closer to junior than senior.

Symfony was designed with long-term maintainability in mind, which makes it more suited for business environments.

If you’re looking at an application that will be a critical part of your organization’s success then you probably want to give Symfony a try as it will be a better base for more maintainable software over time.

Another factor to keep in mind is how easy it will be to bring new developers into your team should the need arise.

In general, it's easier to find Laravel developers than Symfony developers, so unless you're willing to train new team members or risk delays in staffing projects, you're going to be better off going with Laravel.

Followup resources

Here are a few resources to complement what you've just learned:

https://kinsta.com/blog/php-frameworks/

https://symfony.com/doc/current/best_practices.html

https://laracasts.com/series/laravel-8-from-scratch

https://youtu.be/4m3qUkIyPY8

Conclusion

As you learned here, both frameworks are robust and mature tools which offer great features for professional and agile development using PHP.

No matter which one you choose, if you follow general best practices you can always switch without too much trouble, so pick the one you feel more comfortable with today and start building apps!

The post Laravel vs. Symfony: A Side by Side Comparison appeared first on International PHP Conference.

A simple alternative to XML, JSON, and co https://phpconference.com/blog/a-simple-alternative-to-xml-json-and-co/ Fri, 15 Jul 2022 14:30:33 +0000 https://phpconference.com/?p=84259 Have you ever wondered why comments in XML are written in such a complicated way or why JSON doesn’t offer comments at all? Have you ever needed to fix an indentation error in your YAML file? If so, then you already know some of the hurdles in common text-based data formats. This article will take a look at an alternative: the Simple Markup Language.


Whether for configuration files, for data exchange between client and server, or for object serialization, text-based formats like XML, JSON, YAML, and co. are ubiquitous for developers. But it's not just developers who are confronted with these formats. Software users, support specialists, administrators, and consultants also work with files and data streams in these formats for configuration or error analysis. Although their basic concepts are relatively easy to understand, not everyone can easily find their way around the sometimes extensive rules.

Even advanced developers don't know all the lengthy specification details by heart. The 84-page YAML specification is an example of this, as is the XML specification. And when a format offers several alternative notations for the same thing, even experts start to wonder which to use. For example, YAML offers nine different ways to write a multiline string [1]. And anyone who has defined an XML format is guaranteed to have had a discussion or two about whether a value should be written as an element or as an attribute.


Writing documents in these formats isn't easy for everyone, either. Even though programming languages train developers to type many special characters, touch typists in particular know that special characters can significantly slow down writing speed. That's why some German developers switch their keyboard layout to the US layout in order to type special characters more easily.

Another aspect is readability. A JSON document that's minimized down to one line is only meaningfully readable if a developer has tools for formatting or better rendering. How often are documents with sensitive data pasted into pretty-printer web pages without knowing where the data is sent? And most of us have likely come across a raw XML document displayed in Internet Explorer.

The question is, can we find an alternative data format to XML, JSON, and co. that:

  • has reduced its set of rules to a minimum, yet remains functionally equal,
  • is easy and fast to write,
  • is readable even without special tools, and
  • is easy to understand and intuitive, even for non-experts?

The Simple Markup Language, or SML for short, targets exactly these requirements. In the following sections, we’ll take a look at SML’s basic concepts and notation, and at the end, we’ll highlight some potential application areas.

The first example

To get started, let’s consider the following example of an SML geodata format describing a prominent geographic point. In this case, it’s a Seattle city landmark, the Space Needle observation tower.

PointOfInterest
  City		Seattle
  Name		"Space Needle"
  GpsCoords	47.6205 -122.3493
  # Opening hours should go here
End

You can see that SML is a line-based format. The first line starts the document and gives an indication of its content. The second line defines the city that the point of interest is located in and represents an attribute. The attribute name and the attribute value are separated by several spaces; there is no special character like a colon or an equals sign between them. Line three contains the landmark's name. In contrast to line two, the attribute's value is written in double quotes, because the name itself contains a space. The next line represents the point's GPS coordinates. Attributes can contain several values; as in this case, these are separated from each other with spaces or other whitespace characters and written one after another.

The second-to-last line contains no information other than a comment, which starts with a hash and runs to the end of the line. The last line contains the word End and closes the document. You'll notice that SML gets by with relatively few special characters. Even someone who isn't an expert can easily type this text, and it wouldn't take them very long.

XML, JSON, and YAML

Now, let’s compare the SML document with an XML document containing the same information (Listing 1).

<?xml version="1.0" encoding="UTF-8"?>
<PointOfInterest>
  <City>Seattle</City>
  <Name>Space Needle</Name>
  <GpsCoords lat="47.6205" long="-122.3493"/>
  <!-- Opening hours should go here -->
</PointOfInterest>

The first thing to note is that this is just one way the information from the SML example can be mapped to XML. For example, the GPS coordinates could be represented as sub-elements instead of attributes, or as InnerText separated by special characters that is later split into two components. Which option gets chosen comes down to the developer's preference. XML is a powerful markup language that can be used to represent data in a structured way and to format text in the true sense of a markup language. However, in this example of a structured dataset, we can see that many more special characters are used. Attribute values must be written in double quotes, and closing tags repeat the name of the opening tags. If we held a small typing competition without any special tools, the person typing SML would probably finish first. Besides the XML declaration in the first line and the comment syntax, XML's main hurdles are points such as namespaces and specification details. For example, are line breaks allowed in attributes? Can attributes be commented out? And can you write the syntax of a CDATA block from memory?


For comparison, let’s consider another widely used standard: the JavaScript Object Notation. In JSON, our geodata example would look like this:

{ "City":	"Seattle",
  "Name":	"Space Needle",
  "GpsCoords":	[47.6205, -122.3493],
  "_comment": "Opening hours should go here" }

JSON is a simple data format that's very popular, especially due to its interaction with JavaScript in the browser. It's used for client-server communication, for custom data formats as an alternative to XML, as a configuration format, and more. It builds upon the use of double quotes, colons, commas, square brackets, curly brackets, and some keywords to describe data structures and types. String values must be written in double quotes, and C-compatible escape sequences allow a JSON document to be written entirely in just one line. A forgotten or superfluous trailing comma leads to a syntax error. Comments, which can be written in JavaScript with // and /* */, aren't allowed in the JSON standard. For serialization formats, this might not be a big deal. But for formats where parts are commented in and out, such as configuration files, this can be impractical and has led to various workarounds and alternative formats. These range from key-value pairs—where a special prefix in the key identifies the pair as a comment—to preprocessors that filter out comments before parsing the document, to formats like Hjson [2] and JSON5 [3]. Unlike SML or XML, the JSON root element doesn't have a name. So by default, a JSON document doesn't have an identifier that gives an initial hint about the document's contents.

YAML is a common alternative to JSON. YAML combines JSON's syntax with a reduced-special-character notation based on indentation rules similar to Python's. In YAML, our geodata example looks like this:

City:		Seattle
Name:		Space Needle
GpsCoords:	[47.6205, -122.3493]
# Opening hours should go here

Just like SML, YAML supports single-line comments beginning with a hash. The GPS coordinates are written like a JSON array, but they can also be written individually in multiline notation, preceded by a hyphen and at least one prefixed whitespace. A colon—also followed by at least one whitespace character—is used to separate keys and values. Although YAML appears simple at first glance, its pitfalls lie in its many special rules, which can be quite complex [1]. If you prefer using tabs for indentation, you’ll be disappointed. In YAML, using spaces is mandatory.

In general, the more special characters are used and the more extensive the ruleset is, the more difficult it gets for non-experts and experts alike. The more rules there are, the more can go wrong, and the format’s robustness suffers. Therefore, the ruleset in SML is reduced to a minimum, as is the amount of special characters used. This makes the format robust, simple and fast to type, and easy to learn. Now, let’s take a look at how exactly SML works.

Simple Objects

Before we take a closer look at the Simple Markup Language’s rules, let’s consider the data structure behind SML documents. An SML document represents a hierarchical data structure called a Simple Object. This hierarchical data structure is built from two kinds of nodes: elements and attributes. A simple object has strictly one root element. The root element can contain more elements or attributes. Elements are used for grouping, while attributes contain the actual data in the form of string values. Both kinds of nodes are named. Elements are named groups of child nodes, and attributes are named string arrays.

Figure 1 shows our SML geodata example as a hierarchical data structure. The Simple Object consists of the root element PointOfInterest. Three attributes are subordinate to this root element. The first two attributes—City and Name—contain just one value each. The third attribute—GpsCoords—contains two values.

Fig. 1: Visualization of the SML data structure in the geodata example

Attributes must contain at least one value, but they can also contain empty strings or null values. An element’s child nodes are ordered and can have identical names. For example, it’s possible to subordinate several attributes that have the same name to one element. Elements don’t necessarily have to contain nodes—they can be empty.

There are no restrictions concerning node names, except that they cannot be null. All Unicode characters are allowed, in any order. Importantly, names are case-insensitive: upper- and lowercase spellings are treated as identical. So in our example, we could write the City attribute entirely in lowercase or the Name attribute entirely in uppercase and it would make no difference. Later, we'll take a closer look at why these properties matter in an easy-to-write format.
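In code, the Simple Object data structure can be modeled with two small classes. This is only an illustrative sketch in PHP 8, not the reference library's API:

// An attribute is a named list of string (or null) values
class SmlAttribute
{
    /** @param array<?string> $values at least one entry; null entries allowed */
    public function __construct(
        public string $name,
        public array $values
    ) {
    }
}

// An element is a named, ordered group of child nodes; names may repeat
class SmlElement
{
    /** @var array<SmlElement|SmlAttribute> */
    public array $nodes = [];

    public function __construct(public string $name)
    {
    }
}

// Rebuilding the geodata example from Figure 1:
$root = new SmlElement('PointOfInterest');
$root->nodes[] = new SmlAttribute('City', ['Seattle']);
$root->nodes[] = new SmlAttribute('Name', ['Space Needle']);
$root->nodes[] = new SmlAttribute('GpsCoords', ['47.6205', '-122.3493']);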

Serialization of Simple Objects

Now that we know the data structure behind SML documents, let’s look at their serialization. The simplest form of serialization for a machine is converting to binary format. Then, no parser is needed for reading and the bytes only have to be processed according to a given schema. This form of serialization is machine-friendly, but isn’t human-friendly at all. Alternatively, serialization to a common text-based format like XML or JSON is also possible. But since these are usually case-sensitive or, like XML, have naming restrictions, mapping is tedious and needlessly bloats the resulting documents.


This is where the Simple Markup Language comes in, offering the textual representation of a Simple Object reduced down to the minimum. The basic concept is simple. It is based on the approach that in a textual representation, two text entries with an unknown length must always be separated by at least one character. For this, common formats use colons, commas, equal signs, and other special characters. But why don’t we just use a space and the Enter key? A line break can be represented with just a single character and a space is enough of a visual separation between two values. We’ve therefore arrived at a line-based format.

But how do we distinguish between the two kinds of nodes without marking them with special characters? The trick is simple: consider each line as a set of values separated by one or more consecutive whitespace characters. If the line contains just one value, it's an opening or closing element. If it contains at least two values, it's an attribute. If the line doesn't contain any values, it's an empty line that doesn't contribute to the Simple Object's content.

Whitespace-separated values (WSV)

This concept is comparable to a CSV file (comma-separated values), where values are separated from each other with a separator like a comma or a semicolon. In the case of SML, the separators are a group of characters: whitespace characters. This includes the space character, tabs, and other Unicode whitespace characters. So one line in an SML document is a WSV line, and the entire document is a WSV document [4]. If a value itself contains whitespace characters, we simply enclose it in double quotes. Let's take a look at the other special rules (Listing 2).

ValueWithSpace		"Hello World"
ValueWithDoubleQuotes	"Hello ""World"""
EmptyString			""
Null				-
OnlyOneHyphen		"-"
MultilineText		"Line 1"/"Line 2"
ValueWithHash			"#This is not a comment"

If a value contains a double quote character, it must be written in double quotes and the character must be replaced by the escape sequence "" (a doubled double quote). An empty value is represented by two double quotes directly following each other, and a null value is represented by a hyphen. With this convention, a single hyphen as a value must also be written in double quotes in order to differentiate it from the null value.

Concerning multiline values with line breaks, SML takes the following approach: to obtain a truly line-based format, the line feed character is replaced with the escape sequence "/" (a slash between double quotes). The advantage is that an attribute fits on a single line, even with multiline values, so the document's structure remains recognizable. Another advantage of this approach is that a self-written parser that splits the document string into its lines with a simple string-split call won't return wrong results. In the example, we see how a two-line value is written in one line.

Since the hash sign marks the beginning of a comment in SML, a value containing a hash sign must also be written in double quotes. And with that, the list of rules is already complete. No further characters need to be replaced by escape sequences. The result is that many values, even those with exotic special characters, don't need double quotes.
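To show how small the ruleset is, here is a minimal PHP sketch of a line tokenizer. It is not the reference implementation: it only handles ASCII whitespace, the "" escape, the null hyphen, and comments, and it ignores the multiline "/" rule:

function tokenizeSmlLine(string $line): array
{
    $values = [];
    $length = strlen($line);
    $i = 0;

    while ($i < $length) {
        // Skip the whitespace between values
        while ($i < $length && ctype_space($line[$i])) {
            $i++;
        }
        if ($i >= $length || $line[$i] === '#') {
            break; // end of line or start of a comment
        }

        if ($line[$i] === '"') {
            // Quoted value: "" is an escaped double quote
            $i++;
            $value = '';
            while ($i < $length) {
                if ($line[$i] === '"') {
                    if ($i + 1 < $length && $line[$i + 1] === '"') {
                        $value .= '"';
                        $i += 2;
                        continue;
                    }
                    $i++;
                    break;
                }
                $value .= $line[$i++];
            }
            $values[] = $value;
        } else {
            // Bare value: runs until whitespace or a comment sign
            $start = $i;
            while ($i < $length && !ctype_space($line[$i]) && $line[$i] !== '#') {
                $i++;
            }
            $token = substr($line, $start, $i - $start);
            $values[] = $token === '-' ? null : $token; // a lone hyphen is null
        }
    }

    return $values;
}

// 0 values: empty line; 1 value: element start or end; 2+ values: attribute
var_dump(tokenizeSmlLine('GpsCoords 47.6205 -122.3493'));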

The End

We’ve already defined that an SML line with only one value represents an opening or closing element. We need a special keyword to differentiate which of the two cases we’re dealing with. The most obvious solution is to use a keyword that’s been used in line-based programming languages for decades: the keyword End. This is used by default. But SML goes a step further and allows for arbitrary values as end keywords. The reason is that an SML document doesn’t need to be written in English, it can also be written entirely in a different language. For example, the SML document in Listing 3 is written in German.

Vertragsdaten
  Personendaten
    Nachname Meier
    Vorname Hans
  Ende
  Datum 2021-01-02
Ende

To make this work, when the document is loaded, the parser goes to the end of the document, determines the end keyword, and begins interpreting the lines from the top. This concept allows for comprehensive localization and completely automatic reading without having to specify what the end keyword is.

 

In contrast to the Simple Object, this concept imposes a restriction on naming elements: an element cannot have the same name as the end keyword, or else the hierarchical structure won't be recognized correctly. Otherwise, any name can be chosen, besides the null value. The same notations apply to names as to values. The following example shows an SML document whose root element and child attribute each contain spaces in their names, so they must be written in double quotes.

"My Root Element"
  "My First Attribute" 123
End

The name for elements and attributes must not be null to make processing in programs more robust. Another reason is the possibility that SML documents can be minimized. We’ll take a closer look at that now.

Minimization

In SML, indentations help you better identify the hierarchical structure. Listing 4 shows an example of a game’s configuration file with two child elements.

# Game.cfg
Configuration
  Video
    # Set the resolution settings here
    Resolution   1920 1080  # Alternatively 1280 720
    RefreshRate  60
    Fullscreen   true
  End
  Audio
    Volume 100
    #Music 80
  End
End

Unlike YAML, indentation isn't mandatory in SML, and all whitespace characters can be used as you wish. An attribute's values don't need to be separated from the attribute name by exactly one whitespace character, and the values themselves can be indented arbitrarily as well. Comments are possible everywhere. For example, in the SML configuration file, a comment was left behind the Resolution attribute and the Music attribute was commented out completely. Completely empty lines or lines containing only a comment are possible.

This formatting freedom contributes to SML’s robustness and usability. But for a machine, indentations and comments aren’t important and can be removed. A document reduced to the bare minimum makes sense, especially in the context of client-server communication, where data sizes play a large role. JSON also offers the possibility of minimization. Here, whitespace is completely removed and the document is reduced to a one-liner. This is good for data size. But readability suffers and it leads to the previously mentioned use of Pretty Printers. On the other hand, SML is readable even when minimized, since it preserves line breaks. It might seem counterintuitive to some people, but keeping the line breaks does not hinder minimization, since it only takes one character for a line break. Listing 5 shows the configuration example minimized.

Configuration
Video
Resolution 1920 1080
RefreshRate 60
Fullscreen true
-
Audio
Volume 100
-
-

All comments have been removed and the indentation is gone. Values and attribute names are now separated by only a single space. The end keyword has been replaced by a null value, which is serialized as a plain hyphen. Since element names must not be null, no name collisions can happen here, so minimization is always guaranteed. Even without a pretty printer, the document's contents are easily recognizable.

Encoding

If you store SML documents as files or serialize them as byte arrays, keep in mind that, regarding encoding, SML documents are ReliableTXT documents. ReliableTXT [5] is a convention that specifies how text files are encoded, decoded, and interpreted. The rules are chosen so that common encoding problems are avoided. This is achieved with a mandatory encoding preamble (a short byte sequence) that clearly identifies the encoding used. Thanks to the preamble, we don't need to guess the encoding; reliable reading is possible. ReliableTXT limits the potential encodings to just four Unicode encodings: UTF-8, UTF-16 in little- and big-endian, and UTF-32 big-endian. The BOM (byte order mark) must be written for all of them. If it's left out, an SML loader must report an error and must not read the document. This might sound strict, but it's necessary in order to enable reliable reading. It's important to know that the preamble isn't part of the text content.
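In PHP, the preamble check could look like this minimal sketch (only the detection step is shown, not the decoding; str_starts_with requires PHP 8):

$bytes = file_get_contents('document.sml');

if (str_starts_with($bytes, "\xEF\xBB\xBF")) {
    $encoding = 'UTF-8';
} elseif (str_starts_with($bytes, "\x00\x00\xFE\xFF")) {
    $encoding = 'UTF-32BE';
} elseif (str_starts_with($bytes, "\xFF\xFE")) {
    $encoding = 'UTF-16LE'; // "reverse" in ReliableTXT terms
} elseif (str_starts_with($bytes, "\xFE\xFF")) {
    $encoding = 'UTF-16BE';
} else {
    // No preamble: a conforming SML loader must refuse the document
    throw new RuntimeException('Not a ReliableTXT document: missing encoding preamble');
}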

ReliableTXT also uses a different name for the byte order. The little-endian order is called reverse and the big-endian designation is left out altogether. This is an alternative mnemonic for remembering the byte order. It’s based on the fact that in languages like English or German, numbers are written in big-endian—the first digit has the highest value. For such languages, big-endian is the normal case and little-endian is reversed notation.


When it comes to line breaks, ReliableTXT also walks its own path. The Unicode Standard knows seven different characters that can be interpreted as line breaks. ReliableTXT is limited to a single character for a line break—the line feed character. But ReliableTXT files are not POSIX/Unix text files because the lines aren’t terminated with the linefeed character—they are separated by it. This is similar to the concept of Windows text files, but without using the Carriage Return character. Since the Carriage Return character is considered to be whitespace in an SML document, you won’t run into any problems if an SML document is ever written with Windows line breaks.

Possible uses

SML is a universal format and can be used in a wide variety of areas. It is especially easy to use in the field of structured data. Possible application areas are formats for configuration and localization; 2D, 3D graphics, or UI formats; and geodata, multimedia, and manifest formats. A simple format for recipe instructions is just as possible as complex data structures. Using SML is especially useful if a file needs to be viewed or directly modified in the text editor—for instance, if there's no program with a visual interface yet. SML can also be used to easily create files for other programs. You can create a media playlist like the following with any program.

Tracks
  Track Song1 /storage/sdcard0/Music/Song1.ogg
  Track Song2 /storage/sdcard0/Music/Rock.ogg
  Track Song3 https://www.example.com/Pop.ogg
End

One particular strength of SML is the combination of hierarchical data structures and tabular data. Since attributes can contain multiple values, it's easy to embed tables in SML documents. The table's first column contains a primary key that cannot take null values, so the key can be used directly as the attribute name. The example in Listing 6 shows an SML document containing two embedded tables.

Tables
  Table1
    FirstName	LastName	Age	PlaceOfBirth
    William	Smith		30	Boston
    Olivia	Jones		27	Austin
    Lucas	Brown		38	Chicago
  End
  Table2
    City	State
    Boston	Massachusetts
    Austin	Texas
    Chicago	Illinois
  End
End

The formatting flexibility allows you to use tabs or spaces to present data in a more visually arranged manner. This document can be easily typed into any text editor without having to use a single special character.

Client-server communication and remote procedure calls also profit from SML. Where once an XML file was transmitted with AJAX and then later JSON was used, now SML documents could be exchanged between the machines. Size-wise, SML is at least equivalent to JSON and can even achieve smaller sizes without compression.

Conclusion

At the beginning of this article, we asked whether we can find an alternative data format to XML, JSON, and co.: one with a reduced, minimal set of rules that stays functionally equal, is simple and fast to write, and is readable, easy to understand, and intuitive even for non-experts. Of course, the last point is purely subjective, because everyone decides for themselves what is understandable and intuitive. The first point depends on the purpose a format is used for. If text needs to be formatted in a fine-grained way, then a language like XML or Markdown is likely a good choice. But when it comes to purely structured data, SML is in no way inferior to widely used standards like XML, JSON, and YAML. Especially when it comes to manually modified documents or their display in text editors, SML scores points: it stands out because of its simple notation. Touch typists will likely find the format particularly pleasant, as it's faster to type due to the reduced amount of special characters and its case insensitivity. Regarding the number of rules, SML has reached a minimum that could likely only be undercut by getting rid of some functionality.

Other than that, all that’s left to say is: Just try it out. You can test out SML online directly in the browser (See box: “Just try SML in the browser”) [6]. Reference libraries are available for the top ten TIOBE languages such as Java, PHP, C#, JavaScript, and Python [7] and there are many tutorials for getting started [8]. With this in mind: happy SML typing!

Just try SML in the browser

  • Open www.simpleml.com
  • Select the Try SML Online item
  • Enter an SML document, parse it, and display it as a collapsible and expandable node representation

Links & Literature

[1] https://www.arp242.net/yaml-config.html 

[2] https://hjson.github.io 

[3] https://json5.org 

[4] https://www.whitespacesv.com 

[5] https://www.reliabletxt.com 

[6] https://www.simpleml.com 

[7] https://github.com/Stenway 

[8] https://www.youtube.com/channel/UCSVt-9JcnxfTFnztnLEQQug

The post A simple alternative to XML, JSON, and co appeared first on International PHP Conference.

Keynote: How to design for the metaverse? https://phpconference.com/blog/keynote-how-to-design-for-the-metaverse/ Tue, 07 Jun 2022 12:46:39 +0000 https://phpconference.com/?p=83918 Calling all creators! How are you going to build the metaverse? Will it be an ever-expanding, sprawling virtual city? Or will it be a discussion of infinite interactions and boundless possibilities? No one can predict how our next-gen future, the metaverse, will look and feel. So how can you prepare for it? We share the skills and tools you’ll need to make you and your business metaverse ready.

The metaverse is the next big thing, and yet no one can predict what this next-gen future will look like. So how can you prepare for it? In his keynote, Guillaume Vaslin shares the skills and tools you'll need to make you and your business metaverse-ready.

 


The post Keynote: How to design for the metaverse? appeared first on International PHP Conference.

The wonderful world of Contao https://phpconference.com/blog/the-wonderful-world-of-contao/ Mon, 11 Apr 2022 11:45:59 +0000 https://phpconference.com/?p=83471 Contao recently celebrated its 15th birthday and is still greatly popular in the community. It's high time to take a closer look at the smart open source CMS.


When Contao first saw the light of day in 2006, it was called TYPOLight. But the name caused some confusion, as many people immediately pictured a slimmed-down version of another well-known CMS. That doesn't do Contao justice: the CMS is a full-fledged system that doesn't need to hide behind the big names on the market and can be used for enterprise projects without hesitation. The name was therefore changed to Contao in 2010. Its TYPOLight past can still be seen in the database, as all tables start with tl_.

This brings us to the technology. Contao is written in PHP and uses a MySQL database to store its contents. Since the release of version 4 in 2015, it uses Symfony as its framework. At the time of writing, the project has already reached version 4.11, and long-term support is also available. Let's take a look at what you can do with this open source CMS, starting with the installation.

Everything is under control with the manager

Contao can be integrated into an existing Symfony project [1], but you can also use the Managed Edition, which takes care of the Symfony configuration. Installation is done via Composer, but if you aren't comfortable with the console, don't worry: the Contao developers have created a useful tool. The manager [2] serves as a graphical interface for Composer and for system maintenance tasks (for example, clearing the cache). To use it, simply download a file and copy it to the webserver; the user is then guided through the installation. With the help of the manager, packages can be managed (Fig. 1) and extensions can be installed. Composer is used in the background, but you can continue using the command line in parallel. In a shared project, developers can use the tools they are familiar with, while designers can use the graphical interface.

Composer sometimes requires a lot of memory, which may not be available in every hosting package. But Contao has a solution for this too: the Composer Resolver Cloud [3]. A composer.json file can be sent there; the cloud handles the dependency resolution and returns the finished composer.lock file. This process is transparently integrated into the manager, which means that Contao can also be installed on webspaces with less powerful hardware.

Once the files are installed via the Contao Manager or the command line, the Installtool must be called to enter the database credentials and create the tables. This is done in a separate tool for historical reasons, as the Installtool has been around much longer than the Contao Manager; in the future, its tasks will likely be integrated into the manager. An admin account is created in the Installtool, which can be used to log into the backend.


Fig.1: Package management in Contao Manager

Website design made easy

In the backend, navigation is located in the left column and is divided into four areas: CONTENT, LAYOUT, USER ADMINISTRATION, and SYSTEM. Web designers will spend most of their time in the LAYOUT area, creating modules or page layouts, and customizing templates. Editors will manage content, consisting of articles, news, events, and more. USER MANAGEMENT deals with frontend and backend users. Under the SYSTEM tab, you will find the file management, general settings, and log.

Fig. 2: Page structure

In the menu item PAGE STRUCTURE, you can define the website's structure. The root of the page tree is a starting point where the language and website domain are determined. With Contao, you can manage multiple domains in one installation. In the case of a multilingual website, a page tree is created for each language; Figure 2 shows an example of this type of page structure. The name and URL can be set for each page, and you can enter the metadata needed for search engines. If you leave the URL field empty, Contao automatically generates an address based on the page name; for example, "About us" becomes about-us. The suffix .html is appended to each page address, which can be deactivated via a setting at the starting point. You can also set whether nested URLs are generated for nested pages, or whether URLs should be output without hierarchy.

Whether a page is published is toggled manually by clicking on the eye icon, a mechanic that exists for most elements in Contao. Generally speaking, the operation is very consistent: the icons on the right side of the figure recur throughout the system. The pencil takes us to the settings, the green plus sign creates a copy, the blue arrow lets us move an element, and the red cross deletes it.

You can also copy an entire page tree including all of its content. This is especially useful when creating a multilingual website. First, finish creating the website in the main language, then you can copy and translate it.

 

After defining the structure of the website, you can work on a matching theme. A theme consists of page layouts, modules, templates, and CSS. You can create any number of page layouts and assign them to individual pages in the page structure, with parent pages passing their layout down to their subpages. For example, a default layout can be assigned to the starting point and replaced by a more specific layout for certain pages. In the page layout settings, we define the basic structure (Fig. 3) and which layout areas (header, footer, left column, right column) should be included on the page along with the main column. There is a corresponding container for each area in the page's HTML. You can fix the width or height of elements, but it's recommended to manage this with CSS.

You can also define your own layout areas. These can be placed relative to a standard container or freely in the template. You can also select which CSS and JavaScript files you want to include in the page layout. Contao compiles SASS and LESS files on request and, if needed, you can activate Google Analytics or Matomo in the page layout. Service IDs and other settings are entered into a template. The advantage of this is that the same configuration is used in all page layouts.

Fig. 3: Area settings in the page layout

To bring pages to life, Contao offers frontend modules in the layout areas (Fig. 4). We create the modules in a separate view, and each module has certain options according to its type. For example, in the Navigation Menu module, you can set at which level the page tree navigation should start and stop. For each module, Contao executes the corresponding PHP code and renders a template as HTML; it's then the designer's task to format it with CSS. Templates can be easily customized in the backend (Fig. 5), so even non-developers can influence element design. The original template is not overwritten; instead, a copy is created. If the copy is deleted, the original is used again, so there's no danger of breaking anything. Table 1 shows which modules Contao comes with. If the core modules are not enough, you can get more with extensions. Of course, developers can extend existing modules or program their own.

Fig. 4: Frontend modules in the page layout

 

  • Article: Outputs content created under the Articles menu item.
  • Navigation menu: Outputs the whole page tree as a link list. Suitable for the main navigation, for example.
  • Individual navigation: Creates links to selected pages. Suitable for general links in the footer, for example.
  • Custom HTML code: As the name suggests, you can fill this module individually with HTML code. Insert tags can be used for this. For example, the insert tag {{insert_module::10}} inserts the module with the ID 10. This way, you can create nested layouts.
  • Search engine: Allows you to search the website. Contao creates the search index automatically.
  • Navigation path: The path to the current page (breadcrumb).
  • Login form: Allows members (as frontend users) to log into the website.
  • News list: Outputs the list of news.
  • News reader: Outputs the content of a single news item.
  • Calendar: Outputs a calendar for the events created under EVENTS.
Table 1: Examples of frontend modules in Contao

Fig. 5: The template editor


Content is King

Website content is managed under the CONTENT menu item. Articles are the most important content type; they can be created on each page, in the positions selected in the page layout. Articles consist of content elements (Fig. 6) which, similar to modules, are configured with different options depending on their type. They can be hidden or shown with the eye icon and rearranged via drag and drop. Each element has a template that can be customized if needed. Table 2 shows examples of the content elements included in the core; more are available in the corresponding extensions, and people who can program can create their own, of course. The WYSIWYG editor TinyMCE is available for texts (Fig. 7).

Fig. 6: The content elements in the backend

Fig. 7: Contao’s WYSIWYG Editor TinyMCE

 

  • Headline: A headline
  • Text: A text area editable with a WYSIWYG editor (TinyMCE)
  • Image: An image from the file manager
  • Download: A file offered for download
  • Accordion: Openable and closable containers
  • Slider: A slider for images or other content elements
  • Gallery: An image gallery with lightbox
  • HTML: HTML code
  • Code: A code block with syntax highlighting
  • Markdown: Rendered Markdown
  • YouTube/Vimeo: A video from either portal
  • Module: A frontend module
  • Form: A form from the form generator

Table 2: Examples of content elements

 

Besides articles, Contao brings other types of useful content, such as news. You can create as many archives as you'd like and create individual news items in them, each with an associated teaser text and image. News content consists of content elements. Via the News List and News Reader modules, news is included in the page layout or in an article. You can also allow commenting on news, so NEWS works excellently as a blog.

As the name suggests, events can be managed under EVENTS. You can specify a start and end time for each event, and the content is designed with content elements. Events are displayed in the frontend either as a list or as a calendar. At this point, we should also mention the form generator. It allows forms to be composed of typical elements such as text fields, checkboxes, radio buttons, and select menus (Fig. 8). Input is validated on the client and server side, and each form automatically receives a CSRF token. The form's content can be sent via email and stored in the database. This way, you can create both simple and more complex forms. Contao also provides a simple newsletter system. In addition, the CMS offers built-in versioning for content elements, modules, news, forms, and practically all other website components, so you can quickly revert to a previous state and restore deleted elements.

Fig. 8: The form generator

In FILE MANAGEMENT, uploaded images and documents are displayed in a tree structure; folders and files can be moved with drag and drop. File management is database-driven. If a file is renamed or moved, references in the content elements or modules will not be lost. Contao also offers support for responsive images; different image sizes can be created and assigned to images. You only have to upload an image in its largest resolution and Contao will automatically create smaller versions and output the HTML for the responsive images. You can mark a specific area in the image for cropping. Optionally, you can also activate lazy loading.

 

The topic of web accessibility is becoming increasingly important, and Contao covers the basic requirements. Semantic HTML5 elements such as header, main, aside, footer, nav, and article are used in the templates. Microdata is also used where it makes sense. For example, breadcrumbs are provided with:

itemprop="breadcrumb" itemscope itemtype="http://schema.org/BreadcrumbList"

You can specify a title and description for each page. Navigation menus are generated as lists and can be operated using the Tab key. If needed, you can set an individual tab index for specific menu items and assign a keyboard shortcut to them. Each navigation also contains an invisible skip link. Form fields are created with labels and can be equipped with a tab index. You can provide images with an alternative text per file, either globally in the file management or when including images in content elements.

What matters is people

There are two types of users in user management: members and users. Members are frontend users. Contao lets you create a locked area that is only accessible to specific members: in the page structure, you can set which pages are shared with certain member groups, and articles and content elements can be given access protection as well. Contao provides login and registration forms in the form of modules, and you can also activate two-factor authentication if you'd like. Users, in Contao's terminology, are those who use the backend. You can create any number of groups and assign them specific rights. For each user group, you can configure which pages can be edited, which backend areas are accessible (Fig. 9), which content elements are available, and which fields are enabled in input screens. Only displaying the elements a specific group actually needs serves security, but also increases user-friendliness. Users can be assigned to multiple groups; their rights are merged, resulting in a flexible rights system. Two-factor authentication can be activated for backend users as well.

An open source project is only worth as much as the people who look after it. Luckily, Contao has a very productive community: There are reliable core developers and many people who further develop the CMS via GitHub or provide extensions on Packagist. Questions from users and developers are answered quickly in the official forum. Events such as the Contao Conference, the Contao Camp, and the Contao Agency Day are held throughout the year. Contao TV became an official video format recently. The CMS is supported by the Contao Association, an official support association that anyone can become a member of for a fee.

Fig. 9: User group settings

As you like it

One of Contao's greatest strengths is its easy extensibility. Since Symfony is used as a basis, developers who have mastered this framework can program in their usual manner. Contao also comes with its own framework, which was already in use before the integration with Symfony. You can find extensive documentation at [4]. What follows is a small introduction to development with Contao.

One basic concept is the DCA: the Data Container Array. It determines how table entries are managed in the backend. For every table, there is one DCA; you can extend existing DCAs or create completely new ones. Basically, the DCA is an array that configures how the table entries are listed in the backend, which search and filter options are available, which operations (for example edit, copy, delete) can be applied, and which fields can be edited.

$GLOBALS['TL_DCA']['tl_product'] = [
  'config' => [
    // ...
  ],
  'list' => [
    // ...
  ],
  'fields' => [
    // ...
    'description' => [
      'label' => &$GLOBALS['TL_LANG']['tl_product']['description'],
      'exclude' => true,
      'search' => true,
      'inputType' => 'textarea',
      'eval' => ['mandatory' => true, 'rte' => 'tinyMCE',],
      'sql' => ['type' => 'text'],
    ],
  ],
  'palettes' => [
    // ...
  ],
];

Listing 1 shows an example configuration for a fictional description field of a product table. The field's label is entered under label; here, a translation is referenced. The exclude entry determines whether the field must be explicitly enabled in the user administration. The search parameter specifies that the content is searchable in the backend. The inputType determines which HTML element is used, and further options are available under eval. In our example, we specify that this is a required field and that the WYSIWYG editor should be used. Lastly, the sql entry specifies how the data is stored in the database; you can either use the options from Doctrine or enter SQL directly. In addition to the configuration, you can register callbacks that are triggered for certain events. For example, it's possible to execute a specific function when a certain value changes.
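As an illustration, registering a save_callback for the description field could look like the following sketch (the listener class is invented, and it would normally live in its own file rather than next to the DCA entry):

// Runs every time the field is saved in the backend
$GLOBALS['TL_DCA']['tl_product']['fields']['description']['save_callback'][] = [
    ProductDescriptionListener::class,
    'onSaveDescription',
];

class ProductDescriptionListener
{
    public function onSaveDescription(string $value, \Contao\DataContainer $dc): string
    {
        // The returned value is what actually gets stored
        return trim($value);
    }
}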

It takes some time to get used to the DCA concept and its manifold configuration options. But once you've internalized it, it is incredibly powerful; almost anything you can imagine in data management is possible with the DCA. To access table entries, Contao uses models that work similarly to Doctrine entities. In contrast to Doctrine's data mapper pattern, however, it uses the active record pattern. There is therefore no entity manager and there are no repositories; database operations take place directly in the model classes. See the following code for an example of this point:

$product = ProductModel::findByPk(1);
$product->description = 'Lorem ipsum';
$product->save();

Here, the product with ID 1 is loaded from the database, a description is set, and the product is saved.


As mentioned previously, developers can create their own frontend modules and content elements. This is done by creating controllers that inherit from AbstractFrontendModuleController or AbstractContentElementController. In the controller, you execute the necessary logic and ultimately render a template. Contao comes with its own templating engine that also allows inheritance. The template used by a frontend module or content element is determined by its name. For example, the ProductListController uses the mod_product_list.html5 template. Listing 2 shows an example implementation of a frontend module that outputs a list of products. In Contao, you can intervene in the system's logic via hooks, which work similarly to Symfony events. You define a method with the parameters given for a specific event, and this method is executed whenever the event occurs; changing these parameters influences the system's behavior. Hooks can be registered using annotations. Listing 3 shows a hook that adds an additional Time field containing the current date and time to submitted form data.

/**
 * @FrontendModule(category="product")
 */
final class ProductListController extends AbstractFrontendModuleController
{
  protected function getResponse(
    Template $template,
    ModuleModel $model,
    Request $request
  ): ?Response {
    $products = ProductModel::findAll();
 
    $template->products = $products;
 
    return $template->getResponse();
  }
}

/**
 * @Hook("prepareFormData")
 */
final class PrepareFormDataListener
{
  public function __invoke(
    array &$submittedData,
    array $labels,
    array $fields,
    Form $form
  ): void {
    $submittedData['Time'] = (new \DateTime())->format('d.m.Y H:i:s');
  }
}

<?php $this->extend('block_searchable'); ?>
 
<?php $this->block('content'); ?>
  <ul>
    <?php foreach ($this->products as $product): ?>
      <li><?= $product->title; ?></li>
    <?php endforeach; ?>
  </ul>
<?php $this->endblock(); ?>

Conclusion

Contao has developed brilliantly over the past 15 years and is far from being a "light system" anymore. Thanks to intuitive concepts, creating websites is uncomplicated, and the CMS is flexible enough for implementing both small and large projects. Its core already includes many things that would have to be installed via extensions in other systems. With the Contao Manager, a unique graphical interface for package installation with Composer has been created. And thanks to Symfony and Contao's own well-thought-out framework, extensions can be created quickly.

Links & Literature

[1] https://docs.contao.org/dev/getting-started/initial-setup/symfony-application/

[2] https://docs.contao.org/manual/en/installation/contao-manager/

[3] https://www.composer-resolver.cloud

[4] https://docs.contao.org/dev/

The post The wonderful world of Contao appeared first on International PHP Conference.

Simple and practical: Laravel with GraphQL https://phpconference.com/blog/simple-and-practical-laravel-with-graphql/ Mon, 31 Jan 2022 13:19:33 +0000 https://phpconference.com/?p=83135 Lighthouse PHP is a framework for Laravel that simplifies the creation of GraphQL interfaces. Thanks to directives and automation, work for developers is reduced to the essentials: planning and data design. For GraphQL interface users, work is largely facilitated by autocompletion.


As a client, you determine which data is needed; the flood of data is in your hands. In this article, we will experience this directly in the GraphQL Playground, a tool that makes it easy to create and test your own queries. Thanks to tools like the GraphQL Code Generator, using the API in frontend frameworks like Vue.js is practically magic, since the commands for API communication can be generated automatically. As an example project, we will create a small blog. Basic knowledge of PHP, Composer, and Laravel is assumed.

Installation

Let's start by setting up a simple Laravel project. This process can vary depending on your operating system, so please refer to the official documentation [1]. In the next step, we add the Lighthouse framework using Composer. When using Laravel Sail, this should happen in the shell (sail up -d && sail shell). You can see the procedure using Composer in the following:

composer require nuwave/lighthouse
php artisan vendor:publish --tag=lighthouse-schema


The second command generates a schema in the graphql directory. It describes the GraphQL interface and initially contains two queries for reading data from the User model. We will edit this file later. Next, install the GraphQL Playground, which is the best way to test the GraphQL interface:

composer require mll-lab/laravel-graphql-playground

The database connection must also be set up, since we will store our data in a MySQL database.
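With Laravel Sail, the default .env values usually work out of the box. A typical configuration for the Sail MySQL container looks something like this (the exact values depend on your setup):

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=sail
DB_PASSWORD=password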

In the beginning, there is the data structure

For this example project, we’ll use a data structure that’s as simple as possible. We will use the following models:

  • Blog will contain a blog entry.
  • Author will represent a blog author.

We create the Model files and the Migration files, keeping them simple. We create, or rather prepare, the Author first since we will want to refer to it in the blog. In our case, a blog has a single author:

php artisan make:model Author -mf 
php artisan make:model Blog -mf

Only a few columns in our migration files are required for the necessary tables, which you can see in Listings 1 and 2.

database/migrations/...create_authors_table.php
public function up()
{
  Schema::create('authors', function (Blueprint $table) {
    // The author only has one name, 
    // an ID, creation and update date
    $table->id();
    $table->timestamps();
    $table->string('name');
  });
}

database/migrations/…create_blogs_table.php
public function up()
{
  Schema::create('blogs', function (Blueprint $table) {
    // The blog has a title, content, and author
    // an ID, creation and update date
    $table->id();
    $table->timestamps();
    $table->string('title');
    $table->mediumText('content');
    $table->foreignIdFor(\App\Models\Author::class);
  });
}

With php artisan migrate, we write the changes to the MySQL database. Now we add the relationship between Author and Blog to the models (Listing 3). There is something important for Lighthouse to take note of: the return type must be specified in the relationship function, or else Lighthouse cannot automatically recognize relationships (Listing 4). Note also that the relationship method on Author is named blogs, matching the field name we will later use in the schema.

app/Models/Author.php
namespace App\Models;
 
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\HasMany;
 
class Author extends Model
{
  protected $fillable = ['name'];
 
  // The return type lets Lighthouse recognize the relationship
  public function blogs(): HasMany
  {
    return $this->hasMany(Blog::class);
  }
}

app/Models/Blog.php
namespace App\Models;
 
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
 
class Blog extends Model
{
  protected $fillable = ['title', 'content'];
 
  public function author(): BelongsTo
  {
    return $this->belongsTo(Author::class);
  }
}

This will give us a simple data structure we can experiment with. Optionally, we could create factory and seed classes to fill the tables with fake data. However, since we want to insert new entries into the tables with GraphQL later, we don’t need this right now.
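Should you want fake data after all, the AuthorFactory generated by the -f flag could be filled in as follows. This is a minimal sketch assuming Laravel 8’s class-based factories:

namespace Database\Factories;
 
use App\Models\Author;
use Illuminate\Database\Eloquent\Factories\Factory;
 
class AuthorFactory extends Factory
{
  protected $model = Author::class;
 
  public function definition()
  {
    // Generate one fake name per author
    return ['name' => $this->faker->name()];
  }
}

A seeder could then call AuthorFactory::new()->count(10)->create() to fill the table.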

 

First steps with the GraphQL interface

After defining our basic data structure, let’s open the GraphQL Playground at the URL http://localhost/graphql-playground. Queries for users already exist in our schema, so let’s test whether the Playground works. After opening it, we write an opening curly bracket in the left-hand field. This is shorthand for query QUERYNAME {} and means “I want to execute a query”. Within this mode, pressing CTRL + SPACE opens autocompletion. It should already show suggestions for all permitted queries (Fig. 1).

Fig. 1: Suggestions of all permitted queries

As a first query, we request a list of all users. For this, select users in the autocompletion; alternatively, it can be written out manually. Now we have to define which of the possible attributes we want back. This is GraphQL’s strength: as users of the API, we can greatly reduce traffic by specifying exactly what we need. To select the possible fields, we write a curly bracket again and activate autocompletion with CTRL + SPACE. Or we can click on Docs in the right margin to see what’s possible. Bit by bit, we build up our query (Listing 5).

query getFirstOfUsers {
  users {
    paginatorInfo{
      total
      currentPage
      lastPage
    }
    data {
      id
      email
    }
  }
}

The users query is backed by Lighthouse’s @paginate directive, so the result is not returned in one go but page by page. The paginatorInfo type therefore reports the current page and the total number of entries, while the actual records are nested under data.
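Both can be controlled with arguments. The following query, a sketch using the argument names of Lighthouse’s paginator, requests the second page with five users per page:

query getSecondPage {
  users(first: 5, page: 2) {
    paginatorInfo {
      currentPage
      lastPage
    }
    data {
      id
      email
    }
  }
}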

Hello World

We will now rewrite the schema and adapt it to our needs. Data writing is done by mutations; data query is done by queries. Open the file /graphql/schema.graphql and first, define a query that executes PHP code we wrote ourselves and returns its value as a response. In the schema, we enter it as:

type Query {
  hello(name: String): String!
}

This adds the query hello and guarantees that the response is always a string. The exclamation mark marks the answer as “not null”: there will always be a string, and null will never be returned. Additionally, we define the argument name. It is optional, since String carries no exclamation mark here. A string can now be passed to the name field when querying. Next, we create the resolver class /app/GraphQL/Queries/Hello.php using the shell:

php artisan lighthouse:query Hello

The command generates a basic skeleton. In it, we write PHP code whose return value corresponds to the type specified for the query. The arguments passed in are available, already validated, in the array of the second parameter. The first argument contains data about the parent element, which in our case is empty (Listing 6).

namespace App\GraphQL\Queries;
 
class Hello
{
  public function __invoke($_, array $args)
  {
    // Return the name, or 'World' if the name is
    // not set
    return ($args['name'] ?? 'World') . '!';
  }
}

Now, we can test this query in the Playground, passing along the name argument:

{
  hello(name: "Tim")
}

The answer we receive is:

{
  "data": {
    "hello": "Tim!"
  }
}

Providing models with GraphQL

Now we want to work with our own data model. For this, we’ll provide a list of all authors and blogs, as seen in Listing 7.

type Query {
  hello(name: String): String!
  authors: [Author!]! @all
  blogs: [Blog!]! @all
}
 
type Blog {
  id: ID!
  title: String!
  content: String!
  author: Author!
  created_at: DateTime!
  updated_at: DateTime!
}
 
type Author {
  id: ID!
  name: String!
  created_at: DateTime!
  updated_at: DateTime!
  blogs: [Blog!]!
}

The square brackets stand for “an array of”. So we want a list of authors and blogs where no entry is null, and at least an empty list is always returned (remember: the exclamation mark stands for “not null”). Author and Blog are each defined with their own type, which also lets us describe their relationship to each other. Lighthouse provides various directives that change how a query works or how its result is generated. These directives can be understood as markers that all start with an @ symbol. Here, we use @all to automatically populate the types with their associated models. Lighthouse then tries to infer the associated model class from the name of the return type. Here, the types are named Author and Blog, just like the model classes, so no additional information is needed. We can test our schema directly in the Playground after saving it and, if necessary, fix any bugs.
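As an aside: if a query name does not match a model name, the model can be specified explicitly via the directive’s model argument. The following hypothetical posts query, for example, would still be served by the Blog model:

type Query {
  posts: [Blog!]! @all(model: "App\\Models\\Blog")
}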


Modify Models with mutations

Finally, let’s add the ability to add a new entry for a model using GraphQL:

type Mutation {
  createAuthor(name: String!): Author! @create
}

This happens automatically with the directive @create. The return value determines the model that will be created. All arguments are submitted to the model directly before saving. There are similar directives for update, delete, and upsert. Lighthouse also provides directives for validation and the documentation gives many examples of this. If the number of arguments becomes very large, it’s possible to collect them all in a separate type. Then, the directive @spread distributes the inner type as separate arguments to the function. We will exploit this in a moment.

type Mutation {
  createAuthor(input: CreateAuthorInput! @spread): Author! @create
}

input CreateAuthorInput {
  name: String!
}

We can execute the following query to test creating an author: 

mutation { 
  createAuthor(input: { 
      name: "Tim" 
  }) {
    id
    name
  }
}
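If successful, the response mirrors our field selection and should look something like this (the ID depends on your data):

{
  "data": {
    "createAuthor": {
      "id": "1",
      "name": "Tim"
    }
  }
}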

After the arguments, the mutation also defines at the end which fields should be returned; in this case, we want to get back the ID and name. Lighthouse also lets us create nested models at the same time, which allows us to save queries later for more complex structures. This is achieved with a special syntax that varies depending on the relationship; the Lighthouse documentation explains it in detail [2]. As a final example, let’s generate a blog post and a matching author for it in one call. First, we extend our schema, as seen in Listing 8.

type Mutation {
  createAuthor(input: CreateAuthorInput! @spread): Author! @create
  createBlog(input: CreateBlogInput! @spread): Blog! @create
}
 
input CreateBlogInput {
  title: String!
  content: String!
  author: CreateAuthorBelongsTo
}
 
input CreateAuthorBelongsTo {
  connect: ID
  create: CreateAuthorInput
}
 
input CreateAuthorInput {
  name: String!
}

Then, we test the mutation in the Playground again (Listing 9).

mutation {
  createBlog(input: {
    title: "Our Title"
    content: "Our first entry with GraphQL"
    author: {
      create: {
        name: "Tim"
      }
    }
  }) {
    id
    title
    content
    author {
      id
      name
    }
  }
}

If it’s successful, we get back the generated entry, as shown in Listing 10.

{
  "data": {
    "createBlog": {
      "id": "1",
      "title": "Our Title",
      "content": "Our first entry with GraphQL",
      "author": {
        "id": "1",
        "name": "Tim"
      }
    }
  }
}

All blog entries including the author can be returned with the following query:

query blogs { 
  blogs { 
    id 
    title
    content
    author {
      name
    }
  } 
}


Working with Lighthouse in the Frontend

In the following, we will use GraphQL with automatically generated code in a React frontend. To do so, we first scaffold the React boilerplate with the slightly older Laravel UI package. npm requires Node.js, which is already installed in the Laravel Sail container, for example.

composer require laravel/ui
php artisan ui react
npm install && npm run dev

To view our React component, we update our welcome.blade.php as seen in Listing 11.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8"/>
  <link rel="stylesheet" href="/css/app.css"/>
</head>
<body>
  <div id="example"></div>
  <script src="/js/app.js"></script>
</body>
</html>

After compiling the assets again with npm run watch, we can view the sample component by calling http://localhost. With Apollo, we access our data on the client side:

npm i @apollo/client graphql

Now, we can use Example.js to write our code. First, we need to initialize Apollo and build a basic framework for our application. We want to address the Hello query and directly display the server’s response (Listing 12).

import React, { useState } from 'react'
import ReactDOM from 'react-dom';
import { ApolloProvider, ApolloClient, InMemoryCache } from '@apollo/client';
 
const client = new ApolloClient({
  uri: 'http://localhost/graphql',
  cache: new InMemoryCache()
});
 
function Example() {
  const [name, setName] = useState("");
  let answer = 'No answer from the server'
  return (
    <div className="container">
      <input
        type="text"
        value={name}
        placeholder="Name"
        onChange={e => setName(e.target.value)}
      />
      <div>
        { answer }
      </div>
    </div>
  );
}
 
export default Example;
 
if (document.getElementById('example')) {
  ReactDOM.render(
    <ApolloProvider client={client}>
      <Example />
    </ApolloProvider>, document.getElementById('example'));
}

With npm run watch, we compile the code, and afterwards we call the page in the browser. Now we can add our query, which we specify and test in the Playground first. Here, we declare the arguments the query expects; in this instance, $name is to be handed over:

query hello($name: String) {
  hello(name: $name)
}

Initially, this seems a bit awkward. But errors are clearer if the query is named (here, the first hello is that name), and arguments (in this case, $name) can be used multiple times within a query. In the application, we use Apollo’s useQuery hook. Its advantage is that it directly returns the load status and potential errors, as shown in Listing 13.

import { ApolloProvider, ApolloClient, InMemoryCache, useQuery, gql } from '@apollo/client'
 
// ...
 
function Example() {
  const [name, setName] = useState("");
  const { loading, error, data } = useQuery(gql`
    query hello($name: String) {
      hello(name: $name)
    }
  `,{
    variables: {
      name
    }
  });
 
 
  return (
    <div className="container">
      {error && error.message}
      <input
        type="text"
        value={name}
        placeholder="Name"
        onChange={e => setName(e.target.value)}
      />
      <div>
        {loading && 'Thinking...'}
        {data && (data?.hello ?? 'No answer from server') }
      </div>
    </div>
  );
}

The result of the server query lands in data, from which we display the contents of hello (the inner hello). With useQuery, the query fires automatically on the first render and whenever the variables change. Besides useQuery, there is also useLazyQuery, where fetching has to be triggered manually. For mutations, there is useMutation, which is similar in functionality to the previous two; as the name suggests, it handles mutations. There is even the possibility of uploading files.
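A minimal useMutation sketch for our createAuthor mutation could look like this; the component and handler wiring are our own invention:

import React from 'react';
import { useMutation, gql } from '@apollo/client';
 
const CREATE_AUTHOR = gql`
  mutation createAuthor($input: CreateAuthorInput!) {
    createAuthor(input: $input) {
      id
      name
    }
  }
`;
 
function AddAuthor() {
  // The mutation only fires when createAuthor() is called
  const [createAuthor, { loading, data }] = useMutation(CREATE_AUTHOR);
 
  return (
    <button
      disabled={loading}
      onClick={() => createAuthor({ variables: { input: { name: 'Tim' } } })}
    >
      {data ? `Created author #${data.createAuthor.id}` : 'Create author'}
    </button>
  );
}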

 

Conclusion

Lighthouse PHP is very well suited for using GraphQL with Laravel. It saves a lot of work, especially in the PHP code, since you do not have to write routes, controllers, and the like for every query.

Links & Literature

[1] https://laravel.com/docs/8.x/installation 

[2] https://lighthouse-php.com/master/eloquent/nested-mutations.html

The great PHP expert check: That was 2021 – what comes next? https://phpconference.com/blog/the-great-php-expert-check/ Wed, 05 Jan 2022 15:56:04 +0000 https://phpconference.com/?p=83007 Another turbulent year in the grip of the pandemic is behind us. We talked to our PHP experts about 2021—how did the language develop? What were the big milestones? And what will the coming year bring?


Corona, home office, and finally on-site conferences again: What made 2021 special for you?

Carsten Windler: My first talk at the International PHP Conference in July 2021 was also my first remote talk due to Corona. It was a rather strange experience to talk into the camera the whole time without any direct feedback from attendees. But then IPC Munich in October this year was all the more enjoyable, since it almost felt like a “normal” conference. I also had the special pleasure of being able to contribute a keynote on my favorite topic: Green IT.

Susanne Moog: I’ve been working completely remote for many years—in 2021, many of my colleagues also switched over. For me, this was a step forward, since much more communication takes place over digital channels now; exchanging ideas has improved significantly, and now we can also hold “occasional communication” remotely, as we’d normally do when getting a coffee together. In addition, I participated in various online conferences, and—when the infrastructure was well done—had a lot of fun in “hallways” and virtual bars. Conferences in virtual worlds cannot replace the real experience, but they were, in my opinion, a good substitute in the current circumstances.

Roland Golla: As a conference topic, I like seeing a talk that has a story. It encourages me to think outside the box. But that’s not possible remotely, and it’s hardly fun for the speakers either. That’s why hybrid conferences like IPC are simply great, for unbiased conversations, beer, and humanity too. That’s what we live for—new and old friends in the PHP family.

Mark Story: There were a few things that made 2021 special for me. I was finally able to take the time to learn some new technologies. It gave me the opportunity to learn some C to design my first circuit board. It was also a good year for my open source work. The CakePHP team was able to ship a new version with a lot of features. I also took the time to learn Inertia.js and build an application that I use daily.

Jörg Moldenhauer: 2021 was the second Corona year. There was a pleasant period in the summer when the numbers dropped and we thought that normalcy might return. The conferences I attended were held online, just like last year. However, it was nice to be able to attend some training sessions on-site again. Unfortunately, there is not much left of the good times now and it looks like 2022 will be a Corona year too.

Vitalij Mik: This year, I got to dive into a new topic professionally: Shopware. Facing completely new requirements is really extremely exciting. Even though I already knew Symfony, I had to take a closer look at Shopware. It was exciting to still find projects that are this challenging after ten years of professional experience. I think that’s what makes the job of a programmer: there’s always some new technology or framework, and it never gets boring.

Christian Dangl: Keeping things in perspective, 2021 was not very different from the previous year. However, it is clear that contact with others became increasingly important in 2021. At the beginning of the pandemic, our industry thought it could “finally” work from home in peace; for me, that feeling increasingly changed.

Conversations with colleagues where you can ignore the time are now a rarity, if you even make it to the office at all. But those are exactly the special ones for me. Moments when the world seemed normal, days when our office was (half)full of colleagues, some of whom we only know digitally. 

Unfortunately, I haven’t been able to attend any on-site conferences yet, but I’m looking forward to them as if I was a little kid. Personally, I was lucky enough to be able to hold many talks and webinars—but unfortunately only digitally. While many events have taken a huge qualitative step towards digitalization (kudos to Shopware Community Day and the team for the broadcast), you look forward to events that you can really “feel”. 

But I think there are also many positive things that can be found in our world. Particularly in the area of digitization, many areas and markets were forced to finally replace outdated systems. So the impetus has finally been given. From that point of view, I would say that I’m glad that people have also learned to appreciate the small social things in life, while our world finally seems to be turning in the area of digitalization. 

P.S: I’ve learned that even in a home office, sometimes a fallback line for your Internet connection wouldn’t be so silly.


How would you rate the past ~12 months for both the PHP language and the community? What was positive? What was negative? Were there any personal highlights?

Windler: It was nice to see that the PHP Conference didn’t fall asleep during the pandemic. The organizers and the community showed a lot of creativity and stamina! Above all, I think the idea of hybrid conferences worked well. The current development of PHP is still great. It’s just wonderful to see how the language continues to evolve steadily. PHP 8.1 again brings us very interesting new features, like enums or readonly properties, which allow us to write leaner, more robust code. I’m very excited to see what future releases will bring us. 

But it is a real pity that Nikita Popov will no longer focus on PHP in the future. The language owes him a lot! However, I am all the more excited about the newly founded PHP Foundation, which includes some well-known names in the community. Therefore, I think the future of PHP is still secure.

Moog: More and more projects are moving to PHP 8.0, which has been a real highlight for me since I really enjoy using many of PHP 8’s features, but I’m always dependent on how compatible the packages I use are. With the last few releases (7.4, 8.0, and 8.1), PHP has fixed a lot of minor inconveniences and is becoming more and more enjoyable. In addition, static analysis tools like PHPStan or Psalm are getting better and better, and you can automatically upgrade many things with the help of Rector. I should also mention Composer 2 here; the performance improvements were impressive.

Golla: Mental health development was definitely negative. Some of our close friends were affected badly, to the point of staying in an acute care unit. I am now taking up this topic again. I have a certain standing in the community and I’ve become aware of my responsibility again. The moment we can and must go back, the truth will become apparent. Many colleagues prefer to turn off their camera during video calls. There are reasons for that and it will take a lot of time to heal.

On the positive side, there are now more real remote jobs. Before Corona, you always had to come into the office, for reasons. Now, even clients like our funeral home have setups for video sessions. They didn’t even know what the Internet was before. That’s real progress. On the software development side, of course, things have changed: RectorPHP is a spectacular project that I really like. Personally, I’m getting more and more involved with PHPUnit, which is also developing spectacularly. PHP8 is also great—you can hardly keep up with it this year.

Story: The last 12 months certainly had some ups and downs. PHP 8.1 was released as well as a plethora of new libraries like Pest and MiniCLI. Nikic pulled out of the project and the PHP Foundation was formed. I am confident that the Foundation will find a way to keep PHP alive for a long time to come.

Moldenhauer: My highlight was trying out PHP 8 in practice. Little things like the named parameters quickly proved to be useful features that you wouldn’t want to do without. It’s also nice that attributes are now firmly integrated into the language and no longer have to be recreated via annotations in comments. Symfony version 6 was released at the end of the year. It didn’t bring any groundbreaking innovations like version 4 did, but it added some useful new improvements for existing components. By the way, all of them are also included in the LTS version 5.4, which was released at the same time.

Mik: Like every year, the language only got better and better. PHP 8.0 is already a year old, but the majority of people probably still use PHP 7.4. It’s a shame that generics still haven’t been implemented. But I think there is a reason for it because if it was easy, then we’d have them already. My personal highlight was Hacktoberfest and that I managed to fix some bugs in Shopware. 

Dangl: The release of PHP 8.0, which is now a year ago, ushered in a new era of sorts for me. While versions 7.x felt very much like stabilization to me—catching up with the basic functionality of other programming languages—with 8.x you can clearly see that now PHP is modern and exciting. That said, one of my highlights was definitely when I learned about the official ENUMS making their way into PHP 8.1. I think the neighboring town could hear me when I shouted “finally!” with joy. Sometimes a highlight is just the little things.

What also makes me very happy is the release of PHPStan 1.0. I don’t know anyone who doesn’t use it actively in at least one of their projects or plans to. It’s definitely one of the most important tools whose official versioning finally arrived. And for the 1st birthday, Ondřej gave us a new task—Level 9!

 

From a PHP point of view, what are your future wishes? Where is there a need for improvement, or what are you missing?

Windler: I would be happy to see native support for asynchronous calls. Of course, there are already excellent extensions and libraries in this regard, but I believe that the topic would receive more attention and more use as a fixed component of PHP.

Moog: I would like to see more standards. The PHP-FIG already does a good job of defining interfaces that can be implemented by different systems to ensure interoperability. These interfaces should be used by more systems. Composer made it very easy to use packages from different developers; to me, the next logical step is more PSR standards that make interoperability even easier. In the open source area, there is still a lot of potential for cooperation. Since a lot is done in our spare time, it makes sense to work as efficiently as possible and for me, that means cooperating and being able to share implementations.

Golla: Young talent is not addressed, integrated, and promoted. The greed for full-stack developers who just work without a training period is bad. We have to become much more sustainable. Hundreds of people who want to do apprenticeships are neglected. What’s also bad is price gouging among agencies. We can barely breathe in the legacy swamp and get no spare time for open source and our passions. There is a lack of visibility as to why software quality is worthwhile.

Story: PHP is in a great position. Progress in recent PHP versions has been tremendous. But this development has not come for free: incompatible changes have made it harder to maintain libraries and frameworks that support multiple PHP versions.

Moldenhauer: PHP could go one step further with typing. Typed arrays and variables are still missing. It would also be nice if strict types were automatically enabled in one of the next major versions, even if that would break many old applications and libraries. You have to keep up with the times. Strict typing should be standard by now.

Mik: In PHP, we work a lot with simple data structures, such as arrays, but these are not implemented in a particularly performant way. It might be desirable to be able to define data structures. We have classes, but creating them and including them via autoload is a bit of a hassle. Private classes, known from Java or structures from C++, would certainly be interesting. A structure definition that I can quickly type in a class, like an array with typehints, would be great. But that will probably never come.

Dangl: I would like to see PHP not only become modern, but also continue to work on stabilization and fundamentals to get closer to languages like C#, etc. (see ENUMS). For example, I find it practical to work with type declarations in PHP as well. But I also wish that blurring this, with the possibility of specifying “mixed” in return, or something similar, finds its way in less often. Maybe this is simply due to my own preferences as a programmer, but I’m a big fan of having as strict fundamentals in a programming language as possible, leaving little room for interpretation. I also hope that there aren’t too many more modern short syntax options integrated allowing you to do all sorts of things in one line. Again, maybe I’m old school, but I think this can lead to unnecessary errors.

Have a look into the crystal ball: What will 2022 bring us? What are you looking forward to?

Windler: The increasingly urgent fight against climate change won’t stop with us software developers. The aim is to drastically reduce global power consumption and CO2 emissions. PHP still drives a large part of the web pages on the Internet and could play an important role here. But this also means that we have to think more about our software’s efficiency. Maybe plain PHP with a few handpicked Composer packages is enough for your next project instead of the usual heavyweight framework?

Golla: In 2022, I will have my first own employee at TESTIFY – Agency for Tests. That’s super exciting and wonderful, and it’s why I’m heavily involved in business management right now. The market is also exciting. At the beginning of the pandemic, many people’s working hours were immediately reduced; it wasn’t clear that it would take so long. Before that, we were told we would be allowed to clean up once we had time. And the freelancers? Even before Corona, they had an astonishingly good order situation and hourly rates. This triggered a trend: occasionally, freelancers have joined forces. Employee security took a very serious hit. That’s an exciting starting point. Good people left teams; they’ve been stuck in really bad legacy projects for years and don’t have any current know-how. Too expensive and too bad. I personally have some development scenarios, but that’s not for here; I’ve written about this topic before [1]: https://blog.nevercodealone.de/webdesign-agenturen/. Yes, it will certainly be an exciting year and I’m looking forward to it.

Story: I’m excited to see what the PHP Foundation has in store once it gets going. I’m also looking forward to the return of real events and conferences that I’ve missed over the past two years. I also hope to see the public slowly lose interest in cryptocurrencies and related digital assets as the scams continue to pile up. I’d like to see the great minds and capable engineers working on these technologies find solutions that help realize the ambitions of a decentralized web, without the underlying fraud.

Moldenhauer: I’m looking forward to PHP 8.1 and native enums. It’s already been released this year, but upgrading all applications will probably take until next year. Then I can finally work with it effectively.

Mik: I think Fibers and Attributes, which we now have in PHP, will be powerful tools that will significantly increase PHP’s performance in the near future. With Fibers you can run multiple processes in parallel and get the result later. API libraries like Guzzle or database libraries like Doctrine can certainly benefit from this. Since PHP 7.4 reaches its end of life in 2022, PHP 8 will also rely more on attributes instead of annotations, which could also have an impact on performance. In any case, I’m looking forward to PHP gaining even more performance.

Dangl: Well, I think there’s no way to get around mentioning the new PHP Foundation here. It’s a step that, at the time, seems absolutely right to me. Nothing is a better turbocharger for a project than having developers who can work on it full time. For me, it remains to be seen how quickly PHP will continue to develop and what features we’ll see in the future. But who knows, maybe in the not too distant future we will already be using PHP 9.0. Apart from PHP, I see a clear trend in the area of testing. Frameworks and tools like Cypress that release new features and versions at lightning speed help make testing and QA even easier and more palatable. So I’m not surprised that in 2021, testing and pipelines attracted a lot of attention again. From there, I think 2022 will see another boost in the automated QA space.


Our PHP Experts

Carsten Windler has been a PHP developer for many years and as a development manager, he has supervised various teams in different companies and industries. His own experiences with bad code led to an interest in software quality and refactoring legacy code. When he is not working on his hobby projects in his spare time, his daughter usually determines his daily routine.
Susanne Moog has been part of the TYPO3 project for over ten years. She originally studied media economics, but quickly realized that programming was more than just a hobby and started working in IT. She works at TYPO3 GmbH and Team neusta as a Scrum Master, developer, and CTO.
Roland Golla is a PHP trainer for testing and refactoring and founder of Never Code Alone. He founded TESTIFY – Agency for Testing, a start-up for outsourcing E2E testing, and implements CMS projects with the team Die Websprinter using the Symfony Fullstack CMS Sulu.
Mark Story is a Staff Engineer at Sentry and an open source enthusiast based in Toronto, Canada. He is one of the main developers of CakePHP and has long been involved in a number of other PHP and Python projects. Outside of work, Mark is a father of three, plays Magic the Gathering, and enjoys cycling and many winter sports.
Jörg Moldenhauer holds a degree in media informatics and has been working as a web developer for Key-Systems GmbH since summer 2018. There, he creates web applications, APIs and microservices based on Symfony. Before that, he spent five years as a full-stack developer in an advertising agency, implementing numerous projects in front- and backend.
Vitalij Mik has been working as a PHP developer since 2011 and runs a YouTube channel exclusively about PHP.
Web: https://www.youtube.com/c/VitalijMik
Christian Dangl is Head of Technology at the Shopware agency dasistweb GmbH. His focus is on system architecture, software architecture, automations, integrations, and QA. He is also the managing director of Live Score GmbH, a scoreboard playout software for sports broadcasts, enjoys giving training, and gives the odd talk at webinars and conferences.

Frameworkless: Sometimes, less is more https://phpconference.com/blog/frameworkless-sometimes-less-is-more/ Tue, 24 Aug 2021 10:01:56 +0000 https://phpconference.com/?p=82584 Frameworks have been our faithful companions for many years. They are partly responsible for the success of our favorite language. They have their advantages, however, they also sometimes have substantial disadvantages. So why not simply create your next web service without a framework? We will forego barely any conveniences and in passing, we will learn to understand our applications on a deeper level.


PHP started out as a simple scripting language for generating HTML documents. As an open source project with many different contributors, it developed into the world’s most popular language for web applications. This development also led to its many shortcomings and inconsistencies. In the beginning, there were no uniform coding standards or best practices. Every developer stayed in their own lane.

Clear the stage

More and more, PHP developed into a language to be taken seriously, adapting various aspects of other programming languages. As applications became more professional, the need for standardization within the developer community grew. In the mid-2000s, many frameworks sprouted up to address these issues. Most frameworks from this time are still with us today, such as Symfony, Yii, CakePHP, and the Zend Framework (now renamed Laminas). Laravel joined relatively late in 2011, but it quickly became top dog. Because of their presence in the PHP world, we no longer question the use of frameworks. If a new project is lined up, the choice of framework is discussed, or the skeleton generator is started up. In the following, we will explain why this isn’t always a good idea and what alternatives there are.


Where frameworks score points

Let’s first take a closer look at the advantages of using frameworks. Programming beginners in particular can get started quickly thanks to a framework’s documentation, fixed structures, and rules. Developers who already have experience with other frameworks can usually familiarize themselves quickly. The framework community is supported by literature and tutorials. Like-minded people meet at conferences and get further support from online forums or external consulting. A few popular frameworks even offer paid services to make software development easier still. Security vulnerabilities in web applications have given PHP a bad reputation. Often, these errors were not due to PHP itself but to careless (or ignorant) programming, which frameworks helped to compensate for. Security features such as escaping user input became available without much programming effort and thus achieved wide acceptance. The components contained in frameworks meet high quality standards and are well coordinated. Many features can be activated with nothing more than the appropriate configuration files. This, along with a large number of available helper functions, leads to high development speed.

The downside

Since a framework has to be prepared for as many different use cases as possible, it cannot avoid considerable overhead. This leads to increased resource consumption and reduced execution speed compared to customized applications. Because of strong communities, more and more developers concentrate on their favorite frameworks; people often even call themselves Laravel or Symfony developers instead of PHP developers. But focusing on frameworks can lead to overlooking interesting developments happening outside of the framework ecosystem. One serious disadvantage is the update risk. Major updates usually come with breaking changes, which can cause high refactoring efforts. If the framework was not used as its authors intended (for instance, if workarounds or bad practices have spread throughout the code via copy and paste), then updating can become difficult or even impossible. This can endanger or even prevent the application’s long-term survival and should not be underestimated, especially in commercial applications.

High development speed is also achieved by using antipatterns. For instance, Laravel Facades (see box: “Antipattern: Laravel Facades”) bind code very closely to the framework [1]. This makes exchanging individual modules much more difficult. Unfortunately, this usually only becomes obvious after the project has reached a certain maturity or size, and the refactoring effort has grown disproportionately.

Antipattern: Laravel Facades


Laravel Facades are a good example of how the wish for simpler software development can lead to serious problems in the long run. Facades give developers direct access to the most important framework components anywhere in the code, without having to worry about dependency injection. Justifiably, some people may think of the (rather scorned) Singleton or Registry pattern. A constructor with three or four parameters will quickly be exposed as a code smell, but if dependencies can be pulled in anywhere in the code on the fly, uncontrolled growth is pre-programmed in the truest sense of the word. Problems arise especially when database schema details are hardcoded while using the DB facade. The following example from a controller is taken directly from the official Laravel documentation [2]:

$users = DB::select('select * from users where active = ?', [1]);
return view('user.index', ['users' => $users]);

In Laravel jargon, this syntax is “expressive and elegant”. Opinions may be divided on that. What happens if a column named deleted suddenly has to be taken into account? Correct: all calls of this type must be found in the code and adapted. Even worse is having to replace the table with a microservice, which is not uncommon in later project phases. The users table is an illustrative example: if you had taken a little longer at the start of the project to write a user service, this wouldn’t be much of an issue; only the logic in the service class itself would need to be adapted. Without it, a lot of manual work is required, because it rarely stays with simple queries.

Of course, using Laravel Facades (or similar approaches in other frameworks) is faster at first, which makes them very popular. Years later, after the original developers have left the company, their successors become disillusioned when the aforementioned problems occur. It’s a prime example of technical debt [3].
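A minimal sketch of such a user service, with class and method names of our own choosing, could encapsulate the query like this:

namespace App\Services;
 
use Illuminate\Support\Facades\DB;
 
class UserService
{
  // The only place in the application that knows
  // the users table and its columns
  public function getActiveUsers(): array
  {
    return DB::select('select * from users where active = ?', [1]);
  }
}

If the deleted column from the example above is introduced, or the table is replaced by a microservice, only this class has to change.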


Let’s go frameworkless!

Especially when it comes to popular architecture patterns such as microservices, where the focus is on applications that are as performant and simple as possible, you should ask yourself whether there are alternatives; consider the long-term disadvantages of using frameworks. But should we program everything by hand again, like in the past? Thankfully not, since there are many freely available components (usually in the form of Composer packages) that can do most of the work for us. However, we now have to write the manageable amount of glue code that integrates and configures all of these packages. We get help from PHP-FIG [4], which originally began as a way to improve cooperation between frameworks. Over the years, it has defined many basic standards, called PHP Standard Recommendations (PSR).

PHP-FIG is not without some controversy [5], but the resulting PSRs are now fixed components of the PHP ecosystem. With Composer, the initial PSR-0 (autoloading, in the meantime PSR-4) forms the cornerstone for most PHP applications and all modern frameworks. The PSRs cover important parts of the architecture spectrum. Additionally, at least one mature package is available for each PSR, as shown in Table 1.

Number | Description | Package or tool
PSR-3 | Logger Interface | monolog/monolog
PSR-4 | Autoloading Standard | composer
PSR-6 | Caching Interface | symfony/cache
PSR-7 | HTTP Message Interface | laminas/diactoros
PSR-11 | Container Interface | thephpleague/container
PSR-1 and PSR-12 | Coding Standards | squizlabs/PHP_CodeSniffer
PSR-14 | Event Dispatcher | thephpleague/event
PSR-15 | HTTP Handlers | middlewares/awesome-psr15-middlewares
PSR-17 | HTTP Factories | guzzle/psr7
PSR-18 | HTTP Client | guzzlehttp/guzzle

Table 1: The most important PHP standard recommendations and package examples

One big advantage of using PSR-compliant packages is their interchangeability. If a package is no longer being developed or cannot be used for other reasons, it’s likely that another package can be used without having to refactor large parts of the application. Recommendations are based on established best practices, adding to the quality of our applications. Another advantage is that frameworks such as Laminas or Symfony are now basically component-based. The excellent modules can also be used when completely decoupled from the framework. We will make use of this in the following example. Although we don’t want to use a complete framework, there isn’t anything stopping us from using individual components.

A microservice without a framework

Let’s leave theory behind and have a look at a practical example: a simple microservice, as it is commonly found in exactly this basic configuration. Its CRUD operations (Create, Read, Update, and Delete) can be used to create, query, change, or delete data records. The actual range of functions is not important, as we are only interested in demonstrating the basic principle. The PSRs cover many aspects that we need for our microservice: autoloading, logging, a DI container, and everything about HTTP. Even a consistent code style is provided. Figure 1 shows the most important components of this service and which packages can be used as examples; a matching Composer call follows the component list below.

Fig. 1: A simple microservice without using a framework

Let’s look at the packages used here in detail:

  • HTTP Request and Response: We entrust the idiosyncrasies of an HTTP request and its associated response to the laminas/laminas-diactoros component from the Laminas framework. It does outstanding work and is PSR-7 compliant.
  • Router: This central component assigns the appropriate controller (not to be confused with the controllers from the MVC pattern) to the request, based on the URL and the HTTP verb used (for example: GET). There is no separate PSR here, but PSR-7 and PSR-15 provide a solid foundation. The thephpleague/route package is small, fast, and sufficiently flexible for our purposes.
  • Authentication: Unless our web service is used exclusively in a private network, authentication is essential. PSR-15-compliant auth middleware can be found in middlewares/psr15-middlewares and can be integrated in only a few steps. For our purposes, simple basic authentication is enough, but other concepts, like JWT tokens, can be implemented in the same way.
  • Database: For our example, we are using a classic MySQL database. doctrine/dbal provides a simple, yet powerful abstraction layer. Of course, using another database, such as MongoDB, is also possible. PHP packages exist for every established database, but a PSR for databases does not exist (yet?).
  • Logging: The PSR-3-compliant package monolog/monolog is almost a standard in its own right. It’s worthwhile to rely on monolog from the start because from logging, to local files, to cloud storage, an adapter is available for every case and is configured with just a few lines of code.
  • Glue-Code: Even without a framework, we don’t want to forgo code quality and best practices. For dependency injection, we use the container thephpleague/container (PSR-11). Composer (PSR-4) is responsible for autoloading and as a coding standard, we choose PSR-12 or its predecessor PSR-2. squizlabs/PHP_CodeSniffer handles automatic checking.
  • Tests: phpunit/phpunit is a solid choice for Unit Tests. But you can also use other testing tools like Atoum, Codeception, or phpSpec.
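Assembling the service then starts with a single Composer call. Note that the Packagist names can differ from the GitHub organization names used above; league/route and league/container are the published package names, and the authentication middleware shown here is one assumed choice from the middlewares collection:

composer require laminas/laminas-diactoros league/route middlewares/http-authentication doctrine/dbal monolog/monolog league/container
composer require --dev phpunit/phpunit squizlabs/php_codesniffer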


 

All components shown here are mature, regularly maintained, and have hundreds of thousands, often millions, of downloads. We do not use packages with a questionable future, and in the worst case, if a package is no longer maintained, it can be easily replaced thanks to standardization. The example in Listing 1 shows how easy it is to use packages of different origins together, thanks to the PSR interfaces. In contrast to the example above, here we use symfony/dependency-injection from the Symfony framework to demonstrate this flexibility. Only the section where the services and their dependencies are configured would have to be adapted. The Symfony component works a bit differently here, since the configuration of the DI container is not covered by PSR-11. Other than that, the component fits in seamlessly and harmonizes with the router.

// Psr\Http\Message\ServerRequestInterface
$request = Laminas\Diactoros\ServerRequestFactory::fromGlobals();
// Psr\Http\Message\ResponseFactoryInterface
$responseFactory = new Laminas\Diactoros\ResponseFactory();
 
// Psr\Container\ContainerInterface
$container = new Symfony\Component\DependencyInjection\Container();
// Configuration of services and their dependencies (shortened)
$container->set(...);
 
$strategy = new League\Route\Strategy\JsonStrategy($responseFactory);
$strategy->setContainer($container);
 
$router = new League\Route\Router();
$router->setStrategy($strategy);
// Route Configurations (shortened)
$router->map(...);
 
$response = $router->dispatch($request);
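For illustration, a single route definition under the JsonStrategy could look like the following sketch; the path and handler are our own invention, and arrays returned by the handler are encoded as JSON by the strategy:

$router->map('GET', '/users/{id}', function (
  Psr\Http\Message\ServerRequestInterface $request,
  array $args
): array {
  // JsonStrategy turns this array into a JSON response
  return ['id' => $args['id']];
});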

More Code samples


You can find the accompanying sample microservice for this article in a GitHub repository [6]. There you will see the necessary glue code that pieces the above components together. The repository shows a few development stages of the service, organized in different branches. If you’re feeling brave, you can take a look at the branch of version 1, where the microservice was implemented solely using on-board PHP resources. Guaranteed to spark nostalgia or a spooky atmosphere.

A question of security

What about the security of frameworkless applications? Here, developers bear an (even) greater responsibility, even though they use a lot of code from external packages that they cannot influence directly. From now on, we have to take care of package updates ourselves. Unlike with new framework releases, we won’t be notified about updates through newsletters or similar channels. Composer gives us composer outdated --direct, a necessary tool that checks which packages need to be updated. It’s recommended to run this check as a pre-commit hook, or to include it in your CI/CD pipeline. While we’re at it, it’s also worth including the fabpot/local-php-security-checker package, which warns us about known security vulnerabilities in the packages we use. Regardless of whether a framework is used, the following holds: as soon as even one line of custom code is written, the risk of security vulnerabilities increases. So security measures such as penetration tests and tools that automatically check code for potential gaps should be used. Other measures can include team training combined with mandatory code reviews.
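In a CI pipeline, the check can be made to fail the build: as far as we know, composer outdated supports a --strict flag that returns a non-zero exit code as soon as direct dependencies are outdated.

composer outdated --direct --strict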

Agnostic Code

Finally, let’s have a look at how we can make our code sturdier against outside interference. Frameworks and packages are constantly evolving – updates are simply unavoidable. But the framework agnostic approach can greatly limit the problem. In a nutshell, we should write our code so that we are directly using as few framework functions as possible. Business logic in particular can be easily incorporated into other applications, even if they are based on a different framework. Figure 2 shows the starting situation with a close link to the framework.

Fig. 2: A close link between the framework and your own code

Now, it’s a fallacy to assume that framework agnostic code will allow us to easily migrate an application to another framework. But, the approach gives us a big advantage: We organize and write our code in a way that minimizes dependencies on external components. Ideally, we wrap calls to classes and functions outside our scope in wrapper or service classes. Mind you, “outside our scope” also means the framework we are using, since we should never (really, never!) modify its code ourselves, no matter how tempting. Figure 3 shows how we can significantly reduce dependencies on the framework code by introducing a service class.

Fig. 3: Decoupling the framework code

Even completely replacing the DB component would only cause changes within this intermediate layer, and not in many individual places throughout the code. You should use this approach even without a framework, since we must also never (!) directly modify the code of any packages we use. For example: we use the SDK of an external web service to handle user authentication, which includes user rights. However, instead of calling the functions of this SDK directly, we write another service class. The SDK communicates with the web service only within this class. If the web service is changed later, or if breaking changes are introduced in the SDK, only the service class has to be adapted. You could therefore also speak of package-agnostic code. A good negative example of this is the HTTP client Guzzle: within a few years, several major releases were published that were not compatible with previous versions. Using a wrapper class to encapsulate the Guzzle API would have saved many projects a lot of work and frustration. Anyhow: Guzzle is now PSR-7 compliant.
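A sketch of such a wrapper, built against the PSR-17 and PSR-18 interfaces (the class name is our own), keeps the concrete client replaceable:

namespace App\Http;
 
use Psr\Http\Client\ClientInterface;
use Psr\Http\Message\RequestFactoryInterface;
use Psr\Http\Message\ResponseInterface;
 
final class ApiClient
{
  private ClientInterface $client;
  private RequestFactoryInterface $requestFactory;
 
  public function __construct(ClientInterface $client, RequestFactoryInterface $requestFactory)
  {
    $this->client = $client;
    $this->requestFactory = $requestFactory;
  }
 
  // All HTTP traffic passes through this class; swapping
  // Guzzle for another PSR-18 client only touches the DI wiring
  public function get(string $url): ResponseInterface
  {
    return $this->client->sendRequest(
      $this->requestFactory->createRequest('GET', $url)
    );
  }
}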

 

Conclusion

The advantages of frameworks are undeniable, especially for start-ups, where innovation speed can often determine success or failure. But this is exactly where the issue lies: during the first few years, the application grows unchecked, and programming patterns become ingrained that later prove incompatible with newer versions of the framework. That can only be fixed with a very high refactoring effort. Still, frameworks are a good choice when it comes to rapidly creating an MVP or a monolithic web application that needs many views and forms. However, code quality standards and patterns should be prescribed and monitored in the team at an early stage. Additionally, regular software maintenance (updates, eliminating code smells, etc.) should begin as early as possible, before technical debt grows too large and hinders development.

For experienced development teams, the frameworkless approach could be a welcome alternative to frameworks for long-lived services. Of course, in the beginning, this will involve some extra work. But once the basic framework is created, it can be a blueprint for future services and save development time. Make sure you have sufficient documentation and solid test coverage right from the start, then nothing can stand in the way of a robust PHP application.

Links & Literature

[1] https://programmingarehard.com/2014/01/11/stop-using-facades.html 

[2] https://laravel.com/docs/8.x/database#running-a-select-query 

[3] https://www.martinfowler.com/bliki/TechnicalDebt.html 

[4] https://www.php-fig.org 

[5] https://phpthewrongway.com 

[6] https://github.com/carstenwindler/frameworkless 
