<![CDATA[New Work Development - Medium]]> https://tech.new-work.se?source=rss----35cb8c78d3cf---4 https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png New Work Development - Medium https://tech.new-work.se?source=rss----35cb8c78d3cf---4 Medium Tue, 27 Aug 2024 13:37:24 GMT <![CDATA[Protecting sensitive data in Elixir GenServers]]> https://tech.new-work.se/protecting-sensitive-data-in-elixir-genservers-fac4a8b0ae15?source=rss----35cb8c78d3cf---4 https://medium.com/p/fac4a8b0ae15 Tue, 28 Nov 2023 17:07:43 GMT 2023-11-29T09:32:21.598Z In Elixir, GenServers are a common way to maintain state and handle concurrent processes. However, when these GenServers hold sensitive data, such as credentials or personal information, it’s crucial to ensure this data is protected. Sensitive data, if exposed, can lead to serious security breaches, including data leaks and unauthorized access. These breaches can have far-reaching consequences, such as loss of customer trust, damage to your brand’s reputation, and potential legal liabilities.

In this blog post, we’ll explore two techniques to protect sensitive data in Elixir GenServers: implementing the Inspect protocol for structs and implementing the format_status/2 callback.

To illustrate this, let’s take a look at a GenServer that handles some sensitive data. (The GenServer I ended up writing is quite long for a blog post; still, I hope the example helps to explain the different ways of hiding sensitive data in a GenServer, and the trade-offs involved in each approach.)

This GenServer acts like a diligent security guard managing a special “security token” that expires every 15 minutes. It doesn’t wait for the token to expire; instead, it proactively starts a countdown to refresh the token just before expiration. When another process requests the token via the get_security_token function, it ensures the token is valid before handing it over. This creates a seamless cycle of token issuance, countdown, and renewal, ensuring a valid token is always available.
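For reference, a trimmed-down sketch of such a module could look like the following. The environment variables, the refresh margin, and the random token stand-in are assumptions for illustration, not the exact code the session below was run against.

```elixir
defmodule SecurityTokenManager do
  use GenServer

  # Tokens expire every 15 minutes; refresh shortly before that.
  @expires_in_seconds 15 * 60
  @refresh_margin_seconds 30

  defstruct [:access_key, :secret_access, :security_token, :expires_at]

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  def get_security_token do
    GenServer.call(__MODULE__, :get_security_token)
  end

  @impl true
  def init(_opts) do
    state = %__MODULE__{
      access_key: System.get_env("ACCESS_KEY"),
      secret_access: System.get_env("SECRET_ACCESS")
    }

    # Fetch the first token right after init, without blocking the caller.
    {:ok, state, {:continue, :refresh_token}}
  end

  @impl true
  def handle_continue(:refresh_token, state), do: {:noreply, refresh(state)}

  @impl true
  def handle_call(:get_security_token, _from, state) do
    {:reply, state.security_token, state}
  end

  # Countdown fired by Process.send_after/3: renew proactively.
  @impl true
  def handle_info(:refresh_token, state), do: {:noreply, refresh(state)}

  defp refresh(state) do
    security_token = generate_token(state.access_key, state.secret_access)
    expires_at = DateTime.add(DateTime.utc_now(), @expires_in_seconds)

    Process.send_after(
      self(),
      :refresh_token,
      (@expires_in_seconds - @refresh_margin_seconds) * 1000
    )

    %{state | security_token: security_token, expires_at: expires_at}
  end

  # Stand-in for the real call to a token provider.
  defp generate_token(_access_key, _secret_access) do
    Base.encode64(:crypto.strong_rand_bytes(20))
  end
end
```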

❯ iex security_token_manager.ex
Erlang/OTP 26 [erts-14.1] [source] [64-bit] [smp:10:10] [ds:10:10:10] [async-threads:1] [jit]

Interactive Elixir (1.15.6) - press Ctrl+C to exit (type h() ENTER for help)
iex(1)> {:ok, pid} = SecurityTokenManager.start_link()
{:ok, #PID<0.116.0>}
iex(2)> SecurityTokenManager.get_security_token()
"8QVrN1ohPPdiWHfnmEr+ln4VQ4Y="

While it effectively manages the lifecycle of security tokens, it has a potential security concern: the GenServer stores sensitive data, such as the access_key, secret_access, and security_token, in its state. This data could be leaked through logging tools, for example when an error is raised.

Error log output from the SecurityTokenManager GenServer. The server is terminating due to a RuntimeError that occurred while trying to fetch a security token. The error details, including the function calls leading to the error and the state that reveals sensitive data, are displayed.

Or via the :sys.get_status/1 function, which can access the state of a running process.

iex(3)> :sys.get_status(pid)
{:status, #PID<0.116.0>, {:module, :gen_server},
 [
   [
     "$initial_call": {SecurityTokenManager, :init, 1},
     "$ancestors": [#PID<0.115.0>, #PID<0.107.0>]
   ],
   :running,
   #PID<0.115.0>,
   [],
   [
     header: ~c"Status for generic server Elixir.SecurityTokenManager",
     data: [
       {~c"Status", :running},
       {~c"Parent", #PID<0.115.0>},
       {~c"Logged events", []}
     ],
     data: [
       {~c"State",
        %SecurityTokenManager{
          access_key: "my-access-key",
          secret_access: "my-secret-access",
          security_token: "ZjhkWWzemgvCMZXwIit+a/00FHw=",
          expires_at: ~U[2023-11-26 15:05:49.494781Z]
        }}
     ]
   ]
 ]}

This could lead to unauthorized access if the leaked information falls into the wrong hands. Therefore, it’s crucial to ensure that sensitive data stored in a GenServer’s state is adequately protected.

Implementing the format_status/2 callback

The format_status/2 callback provides a way to protect sensitive data in GenServers. This callback is used to provide a custom representation of the GenServer’s state when debugging or introspecting the process.

By default, the format_status/2 callback returns all the state data. To protect sensitive data, we can implement this callback to filter out or obfuscate the sensitive parts of the state.

Here’s how we can implement the format_status/2 callback in our GenServer:

def format_status(_reason, [pdict, state]) do
  {:ok,
   [
     pdict,
     %{
       state
       | access_key: "<sensitive_data>",
         secret_access: "<sensitive_data>",
         security_token: "<sensitive_data>"
     }
   ]}
end

So, when :sys.get_status/1 is called, we get a response that does not display any sensitive data.

iex(4)> :sys.get_status(pid)
{:status, #PID<0.116.0>, {:module, :gen_server},
 [
   [
     "$initial_call": {SecurityTokenManager, :init, 1},
     "$ancestors": [#PID<0.115.0>, #PID<0.107.0>]
   ],
   :running,
   #PID<0.115.0>,
   [],
   [
     header: ~c"Status for generic server Elixir.SecurityTokenManager",
     data: [
       {~c"Status", :running},
       {~c"Parent", #PID<0.115.0>},
       {~c"Logged events", []}
     ],
     ok: [
       [
         "$initial_call": {SecurityTokenManager, :init, 1},
         "$ancestors": [#PID<0.115.0>, #PID<0.107.0>]
       ],
       %SecurityTokenManager{
         access_key: "<sensitive_data>",
         secret_access: "<sensitive_data>",
         security_token: "<sensitive_data>",
         expires_at: ~U[2023-11-26 16:59:47.764327Z]
       }
     ]
   ]
 ]}

This is certainly an improvement, isn’t it? However, sensitive data can still be accessed via the :sys.get_state/1 function, even with format_status/2 implemented.

iex(5)> :sys.get_state(pid)
%SecurityTokenManager{
access_key: "my-access-key",
secret_access: "my-secret-access",
security_token: "fZWO+Dym+bEJ9kw8E1nLNryT5m0=",
expires_at: ~U[2023-11-26 17:16:47.936304Z]
}

The next section will delve into how to prevent this issue.

Implementing or deriving the Inspect protocol for structs

The Inspect protocol controls how data structures are converted to strings for printing. By default, when a struct is printed, all of its data is exposed, which again can lead to sensitive data being accidentally logged or displayed. To prevent this, we can implement the Inspect protocol for our struct to control how it is printed.

defimpl Inspect, for: SecurityTokenManager do
  def inspect(%SecurityTokenManager{} = state, opts) do
    Inspect.Map.inspect(
      %{
        access_key: "<redacted>",
        secret_access: "<redacted>",
        security_token: "<redacted>",
        expires_at: state.expires_at
      },
      opts
    )
  end
end

With the implementation of the Inspect protocol now established, we can achieve the same structured output for both :sys.get_state/1 and :sys.get_status/1 functions.

iex(6)> :sys.get_state(pid)
%{
  access_key: "<redacted>",
  secret_access: "<redacted>",
  security_token: "<redacted>",
  expires_at: ~U[2023-11-26 21:37:53.396092Z]
}
iex(7)> :sys.get_status(pid)
{:status, #PID<0.119.0>, {:module, :gen_server},
 [
   [
     "$initial_call": {SecurityTokenManager, :init, 1},
     "$ancestors": [#PID<0.118.0>, #PID<0.110.0>]
   ],
   :running,
   #PID<0.118.0>,
   [],
   [
     header: ~c"Status for generic server Elixir.SecurityTokenManager",
     data: [
       {~c"Status", :running},
       {~c"Parent", #PID<0.118.0>},
       {~c"Logged events", []}
     ],
     ok: [
       [
         "$initial_call": {SecurityTokenManager, :init, 1},
         "$ancestors": [#PID<0.118.0>, #PID<0.110.0>]
       ],
       %{
         access_key: "<redacted>",
         secret_access: "<redacted>",
         security_token: "<redacted>",
         expires_at: ~U[2023-11-26 21:37:53.396092Z]
       }
     ]
   ]
 ]}

As stated in the subtitle, an alternative is to derive the Inspect protocol. The :only and :except options can be used with @derive to determine which fields should be displayed and which should not. For simplicity, we’ll use the :only option in this instance.

@derive {Inspect, only: [:expires_at]}
defstruct [:access_key, :secret_access, :security_token, :expires_at]

With this method, only :expires_at will be visible. The other fields will not just have their values hidden; their keys will be omitted entirely.

iex(8)> :sys.get_state(pid)
#SecurityTokenManager<expires_at: ~U[2023-11-26 22:42:56.998354Z], ...>
iex(9)> :sys.get_status(pid)
{:status, #PID<0.119.0>, {:module, :gen_server},
 [
   [
     "$initial_call": {SecurityTokenManager, :init, 1},
     "$ancestors": [#PID<0.118.0>, #PID<0.110.0>]
   ],
   :running,
   #PID<0.118.0>,
   [],
   [
     header: ~c"Status for generic server Elixir.SecurityTokenManager",
     data: [
       {~c"Status", :running},
       {~c"Parent", #PID<0.118.0>},
       {~c"Logged events", []}
     ],
     data: [
       {~c"State",
        #SecurityTokenManager<expires_at: ~U[2023-11-26 22:43:57.000550Z], ...>}
     ]
   ]
 ]}

Conclusion

This blog post has explored some techniques to protect sensitive data in Elixir GenServers. It has shown how to implement or derive the Inspect protocol for structs, and how to implement the format_status/2 callback for GenServer, :gen_event or :gen_statem processes holding sensitive data. These techniques can help prevent or limit the exposure of sensitive data in logs, error reports, or terminal outputs, which can compromise the security and privacy of the application and its users.

I hope you have found this useful and informative, and I encourage you to try these techniques in your own projects. If you have any questions or feedback, please feel free to leave a comment below.


Protecting sensitive data in Elixir GenServers was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[One or multiple packages?]]> https://tech.new-work.se/one-or-multiple-packages-ce24a59c653b?source=rss----35cb8c78d3cf---4 https://medium.com/p/ce24a59c653b Wed, 08 Nov 2023 10:27:50 GMT 2023-11-08T10:27:50.011Z Well…it depends

Swift Package Manager is not just a dependency manager; it is also a tool that lets you organize your project into modules and create a package structure with which to organize your frameworks and dependencies.

If CocoaPods has always been your dependency manager, you know that all modules and third-party frameworks are listed in the Podfile, so any framework can access any module or third-party framework through its podspec. With SPM, however, we can control this access and create a package structure that “defines” access rules to the shared code.

Normally, a modular project has different kinds of modules depending on their purpose, and we might organize them into the following categories:

  • Dependencies (third-party frameworks)
  • Feature frameworks
  • Services frameworks (APIClient, tracker, formatters, etc)
  • Testing frameworks
  • UI frameworks

Depending on the project, you can organize it in multiple ways, but in this article I will explain the following one:

  • Testing frameworks should depend only on the system tools and perhaps some third-party frameworks (like a snapshot-testing framework), but they can never depend on feature, service, or UI code.
  • UI frameworks can depend on our testing frameworks and perhaps some third-party frameworks (like animation frameworks), but they can never depend on our core code.
  • Services frameworks may need third-party frameworks, but we should try to inject them instead of adding them as dependencies. These frameworks can depend on our testing frameworks, but they can never depend on feature code.
  • Feature frameworks will need all our frameworks and may also need some third-party frameworks, but those should always be injected.
  • Third-party frameworks will be added directly to the main target, where we can configure and inject them into the frameworks that need them.
Packages graph

Third party frameworks

Let’s create an empty package file for all the third-party frameworks.

The first step is to add all the third-party frameworks to the dependencies property. Next, we need a target to link all the dependencies against, so we create one with an empty file to make Xcode compile and resolve the dependencies. Once we have the target, we can create the product and link it into the main target.

We have created a wrapper where all the third-party dependencies will be located. This way, we can control everything: dependency version, whether the dependency should be compiled (maybe depending on the environment), etc.

Dependencies package
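Sketched as a Package.swift, such a wrapper might look like the following; the framework name and version are placeholders, not from the original project.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "Dependencies",
    products: [
        // One umbrella product linked by the main target.
        .library(name: "Dependencies", targets: ["Dependencies"])
    ],
    dependencies: [
        // Every third-party framework lives only in this package.
        .package(url: "https://github.com/pointfreeco/swift-snapshot-testing", from: "1.0.0")
    ],
    targets: [
        // An (almost) empty target that links every dependency,
        // so Xcode resolves and compiles them in one place.
        .target(
            name: "Dependencies",
            dependencies: [
                .product(name: "SnapshotTesting", package: "swift-snapshot-testing")
            ]
        )
    ]
)
```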

But there are also other ways to solve this:

  • You can add the third-party frameworks directly into the main target without a package file.
  • You can create a package file and one product and target per dependency, but the dependencies list of the main target can be huge.

Once the project has access to the dependencies, we can configure and inject them into the frameworks that need them.

Internal modules

Now it’s time for our frameworks. We will create four packages, as mentioned before: Features, Services, Testing, and UI. Each of them will have a product that contains all the targets in the file, so we add just one product to the dependencies of the main target.

Folder structure and project linking

The main idea is that no feature framework should be a dependency of any of our other frameworks. So, if we connect the packages as defined in the packages graph image, we ensure that nothing but the main target depends on the feature frameworks, and we avoid cyclic dependencies.

Packages definitions

Package modifier

In a package, we can define as many products and targets as we want, but the relation between them is what we should control to avoid unnecessary dependencies.

Swift 5.9 gives us a new access modifier called “package”. It restricts access to the code to just the targets defined in the same package file.
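For illustration (the APIClient type here is hypothetical), a declaration like this is visible to every target declared in its package, but not to other packages or the main app target:

```swift
// Sources/Services/APIClient.swift
package struct APIClient {
    package init() {}

    package func get(_ path: String) -> String {
        // `package` sits between `internal` and `public`:
        // wider than one module, narrower than the whole app.
        "GET \(path)"
    }
}
```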

Conclusion

Organizing the code into modules, and the modules into different packages with a dependency order established by you, is a good practice: you control the dependencies of your modules, favoring dependency injection, testing, and scalability.


One or multiple packages? was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[The Role of Artificial Intelligence in Recruitment]]> https://tech.new-work.se/the-role-of-artificial-intelligence-in-recruitment-c8cba6478ba3?source=rss----35cb8c78d3cf---4 https://medium.com/p/c8cba6478ba3 Thu, 07 Sep 2023 15:01:02 GMT 2023-09-07T15:01:02.706Z Imagine you can have a workmate capable of interacting right away with candidates, screening CVs in the blink of an eye with flawless impartiality, and automatically scheduling interviews at a remarkable speed, while you are focused on bringing value to your processes.

Are you thinking about a human? Here’s the twist: it may not necessarily be one. This is where Artificial Intelligence (AI) comes into play!

Technology has evolved at a mind-blowing pace and transformed our lives.
AI has become a hot topic and different sectors have been incorporating it into their ways of working, recognizing it as a powerful ally to take their businesses to the next level and gain competitive advantage. Therefore, it’s not surprising that AI has (also) been paving its way into the recruitment and selection field.

In a generalist way, AI allows Human Resources (HR) professionals to streamline the recruiting workflow through the automation of manual, repetitive, high-volume, and time-consuming tasks. This way it’s possible to provide a personalized, timesaving, data-driven, and fair approach, both for stakeholders and candidates.

Nowadays, there are several practical applications of AI in the recruitment and selection processes.

One of the most common uses of AI in recruitment is in the screening phase. AI-powered tools can screen large volumes of CVs, using keywords and data to match candidates’ experience and skills to job descriptions. By eliminating demographics from the analysis, it reduces biases and therefore contributes to building a more diverse and inclusive culture.
Moreover, AI can also generate a predictive analysis of how likely it is that the candidate will be successful in a specific role.

The worth of AI goes beyond the initial stages of the hiring process and can be present throughout the entire candidate’s journey. Namely, chatbots are widely used for recruitment purposes and have the potential to improve the candidate experience, by responding in real-time to candidates, immediately informing them about updates to the process and next steps, and even automatically coordinating the scheduling of interviews through calendar integrations.

But despite being a great innovation, some challenges arise from the use of AI in recruitment. Because AI depends on data, ensuring its quality is essential to maintain the reliability and validity of the outcomes. Additionally, we must not overlook the associated ethical and legal aspects, which in fact have been increasingly reflected in labor legislation updates. This way, HR professionals must bear in mind the importance of acting transparently and fairly when employing these tools.

Lastly, does this mean that AI will take the place of HR professionals?

It is possible to envisage that AI will continue to be preponderant in the years ahead and shape the recruitment landscape.

However, human processes will remain indispensable, as AI must assume a supporting role, with the ultimate goal of improving job performance through automation. By introducing AI into the background, HR professionals can free up their schedules to invest in transversal projects, provide an outstanding candidate experience, and define proactive hiring strategies and KPIs, while partnering with Hiring Managers to reduce the Time to Hire, the Cost per Hire, and improve the Quality of Hire.

The Role of Artificial Intelligence in Recruitment was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Shortage of technological talent in Portugal: what are the challenges?]]> https://tech.new-work.se/shortage-of-technological-talent-in-portugal-what-are-the-challenges-c9de1c4df5e9?source=rss----35cb8c78d3cf---4 https://medium.com/p/c9de1c4df5e9 Wed, 30 Aug 2023 14:17:09 GMT 2023-08-30T14:37:08.195Z It is important to have a collaboration between the government, the companies and the educational institutions. Therefore, investments should be made in training professionals and in updating educational programs.

In recent years, humanity has witnessed a technological transformation like never before. This transformation significantly changed the way we live, work, and communicate. Currently, these changes are no less visible, namely due to the increasing integration of artificial intelligence in organizations. At the same time, there has been an increasing shortage of talent in the technological area. So, what are the challenges arising from this lack of talent?

What is known is that, during this period, the technology sector has been in constant growth and, therefore, the need for qualified professionals has also increased, not only in Portugal but all over Europe. However, finding talent has been a challenging task for many technology companies. According to the Talent Shortage Survey 2023, by ManpowerGroup, Portugal was considered the 4th country globally where it is most difficult to hire. IT and Data were identified as the functions most sought after by Portuguese companies.

One of the main challenges contributing to this talent shortage is the rapid obsolescence of knowledge, due to the speed of technological evolution, which makes it more difficult for professionals to keep up to date. In addition, there is still a certain discrepancy between the labor market and the curriculum programs, because things change so quickly that schools/universities may find it more challenging to adapt accordingly.

To overcome this difficulty, companies have chosen to implement upskilling or reskilling programs so that their workers can develop the skills needed to respond to these technological changes, which are arriving faster than ever.

Another challenge that Portuguese companies face concerns globalization. Today, more than ever, talent is in constant motion: either through emigration to more financially attractive countries, or through remote work in Portugal for foreign companies that offer more competitive salaries. One of the strategies that companies have used to deal with this movement of talent has been, and continues to be, offering greater flexibility of working hours to their workers, but also giving them the possibility to work part-time and/or on a remote/hybrid mode.

Because talent moves so easily, salary expectations prove to be another challenge that companies face today. Data from the study “Scarcity of Talent in Portugal”, by Michael Page, corroborates exactly this, with 42% of recruiters stating that salary expectations are the main obstacle when recruiting. In this sense, in addition to flexibility, it is important for companies to develop ways to attract talent, not only with the salary itself, but also with the emotional salary that has been increasingly valued.

In a highly competitive sector, it is important to have collaboration between government, companies, and educational institutions. It is therefore necessary to invest in the training of professionals, update educational programs, and promote policies that encourage innovation. Only this way will it be possible to develop solutions and apply efficient measures to overcome the shortage of talent and invest even more in attracting and retaining talent in Portugal.


Shortage of technological talent in Portugal: what are the challenges? was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Resiliency patterns for cloud-based applications — Part I]]> https://tech.new-work.se/resiliency-patterns-for-cloud-based-applications-part-i-2bf74873bace?source=rss----35cb8c78d3cf---4 https://medium.com/p/2bf74873bace Fri, 28 Jul 2023 08:29:13 GMT 2022-02-28T16:43:12.211Z Resiliency patterns for cloud-based applications — Part I

Aline Souza — Backend Engineer

People expect to be able to use their applications anytime they want to. To accomplish this, engineering teams need to keep resiliency in mind while building their applications. Resilience is the ability of a system to manage and gracefully recover from failures. Resiliency patterns aim to ensure that applications are available whenever users need them.

You need to face a lot of challenges when developing and designing cloud applications. Throughout this article, we are going to walk through some resiliency patterns you may want to consider when building a cloud-based application, to keep it up and running.

Parallel Availability

Resilience can be estimated in terms of a system’s availability at any given time. System availability is determined by the availability of all of its components. These components can be linked in serial or parallel connections [1].

In a serial connection, if one of the components fails the entire system fails. For instance, if a system consists of two components operating in series, a failure on either component leads to a system failure [1], [2].

In a parallel connection, if you have two parallel components and one of them fails, the system keeps running without failure (or at least it should). For example, if a system comprises two components operating in parallel, a failure of a component leads to the other component taking over the operations of the failed component [1], [2].

A serial system is operational only if all its components are available. Hence, the availability of a serial system is the product of the availabilities of its components. For example, in a system with components X and Y, you multiply the availability of component X by the availability of component Y. The following equation represents the availability of the system: A = Aₓ Aᵧ [2], [3].

Based on the above equation, we can conclude that the combined availability of two components in series is always lower than the availability of either individual component. The table below shows the availability and downtime for individual components and the combined system [2], [3], [4].

As we can see from the above table, even though a very high availability component Y was used, the low availability of component X pulls down the overall availability of the system. As the saying goes, a chain is no stronger than its weakest link. In this case, the chain is weaker than its weakest link [2].

Now, assume that you have a system consisting of two instances of component X in parallel. This system is operational if either instance is available. So, the combined availability is 1 minus the product of the unavailabilities. The following equation represents the combined availability of the system: A = 1 − (1 − Aₓ)² [2], [3].

That means that the combined availability of two components in parallel is always much higher than the availability of its individual components. The table below shows the availability and downtime for individual components and the parallel combinations [2], [3],[4], [5].

Looking at the above table, it is clear that even though a very low availability Component X was used, the overall availability of the system is much higher. In such a manner, availability in parallel provides a very powerful mechanism for making a more reliable and resilient system [2].
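The two equations are easy to sanity-check in a few lines of code. A small sketch in Python (the sample numbers are illustrative, not taken from the referenced tables):

```python
def serial(*components):
    """Serial chain: up only when every component is up (A = Ax * Ay)."""
    availability = 1.0
    for a in components:
        availability *= a
    return availability

def parallel(*components):
    """Parallel copies: down only when all copies are down (A = 1 - (1 - Ax)**n)."""
    unavailability = 1.0
    for a in components:
        unavailability *= 1.0 - a
    return 1.0 - unavailability

# Two components in series: the chain is weaker than its weakest link.
print(serial(0.99, 0.9999))    # ~0.9899, below both inputs

# Two 99% copies in parallel: roughly "four nines".
print(parallel(0.99, 0.99))    # ~0.9999
```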

Multi-AZ Deployment

With the previous pattern, we learned that the duplication of the system’s components maximizes the system’s total availability. Within the cloud, this means deploying it over several availability zones (multi-AZ) and, in some situations, across multi-regions.

Availability zones (AZs) are unique physical locations within regions from which public cloud computing resources are hosted. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. Each region is a separate geographic area. The physical separation of AZs within a region protects applications and data from data center failures [6], [7].

If a system is built in a Multi-AZ architecture, it takes advantage of having zone-redundant services replicating the applications and data across AZs to protect from single-points-of-failure [6].

For example, if your system has a Multi-AZ database instance, there is a primary database instance that synchronously replicates the data to a standby instance in a different AZ. In case of an infrastructure failure, there will be an automatic failover to the standby instance, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your database instance remains the same after a failover, your application can resume database operation without the need for manual intervention [8].

Therefore, Multi-AZ deployment increases the availability of a system and its tolerance to faults.

Stateless Services

As we have seen, if a component being called upon fails in your application, you can have a copy of that component ready to go. You can achieve that goal with stateless services.

A stateless app or service does not hold any data or state, so any copy of that service can serve the same function as the original. A stateless model ensures that any request or interaction with the service can be managed independently of the previous requests. This model facilitates auto scalability and recoverability, as new instances of the services can be dynamically created as the need arises or be restarted without losing data that’s required in order to handle any running processes or requests [9].

The widely used REST (REpresentational State Transfer) paradigm is a stateless model, and actually, this is one of the key considerations whether anything is RESTful or not. Roy Fielding’s original dissertation details the REST definition and says [10]:

“Each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client.”

While you might argue that using stateless services is not a resilience strategy per se, it is still an important and valid technique to improve the resilience of a system.

Asynchronous Decoupling

Although REST APIs are popular and useful in designing applications, REST APIs tend to be built with synchronous communications, where a response is required. A request from an end-user client can trigger a complex communication journey within your services architecture that can effectively introduce coupling between the services at runtime [11].

Asynchronous messaging is the most common decoupling technique. Take, for example, the need to send orders generated on different external systems to Component X and Component Y.

From a high-availability perspective, the loosely-coupled asynchronous approach enables Component X and Component Y to be unavailable as a result of a planned or unplanned outage, without affecting the external systems. The external systems can send the order creation request messages to a message queue [12].

On the other hand, if the communication is synchronous, Component X and Component Y must be available for the external system to create the request. The availability requirements of Component X and Component Y in this architecture must be the greatest of all the availability requirements of all tightly connected systems combined [12].

Under a higher load, your services will need to scale out to process the requests. You then have to consider the scale-out latency, as it takes a few moments from when an auto-scaling policy triggers the creation of additional instances until they are ready for action. It takes time to initiate new container tasks too [11].

In a synchronous communication approach, if the scaling event happens late, you may be unable to handle all incoming requests with the available resources. Such requests can be lost or answered with HTTP status code 5xx [11].

In contrast, in an asynchronous communication approach, you can use message queues that buffer messages during a scaling event to help avoid this. This is the more robust architecture, even in use cases where the end-user client is waiting for an immediate response. When your infrastructure takes time to scale out, and you cannot process all requests in a timely manner, the requests will persist [11].

Prioritize traffic with queues

Although it may be easy to see the benefits of a queue asynchronously processing messages, the drawbacks of using a queue are subtle. With a queue-based system, during intervals of high traffic, messages can arrive faster than your services can process them. And when processing stops while messages keep coming in, the message debt grows into a huge backlog, pushing up the processing time [13], [14].

To put it another way, a queue-based system has two operating modes or bimodal behavior. The latency of the system is low when there’s no backlog in the queue, and the system is in steady mode. However, if a failure or a higher load causes the rate of arrival to surpass the processing limit, it easily flips into a more sinister mode of operation. The end-to-end latency in this mode increases exponentially, and it can take a lot of time to work through the backlog to get back into the steady mode [13].

Below are a few design techniques that can help you prevent long queue backlogs and recovery times:

  • In asynchronous systems, security is essential at every layer. In an asynchronous system, each part of the system needs to protect itself against overload, and prevent one workload from consuming an excessive share of resources. So, we protect them by implementing throttling and admission control [13].
  • Using multiple queues helps to control traffic. Asynchronous systems are often multitenant, performing work on behalf of a large number of different customers. In some respects, a single queue is incompatible with multitenancy: once work is queued up in a shared queue, isolating one customer’s workload from another’s is difficult [13].
  • Turn to LIFO behavior instead of FIFO when faced with a backlog. When a backlog happens, most real-time systems prefer to process fresh data immediately; any data accumulated during an outage or spike can then be processed when capacity is available [13].

In certain cases, it is too late to prioritize traffic after a backlog has built up in a queue. However, if processing a message is quite costly or time-consuming, being able to transfer messages into a separate queue can still be worthwhile. For example, during a spike, expensive messages can be transferred to a low-priority queue. We can apply the same approach to messages that meet certain age criteria, transferring them into a separate queue. The system works on low-priority queue messages as soon as resources are available [13], [15].
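The routing rule described above can be sketched in plain Swift. (The Message type, thresholds and queue names here are illustrative assumptions, not taken from the referenced patterns.)

```swift
import Foundation

// Illustrative message type: a cost estimate plus an enqueue timestamp.
struct Message {
    let payload: String
    let estimatedCost: TimeInterval   // expected processing time, in seconds
    let enqueuedAt: Date
}

enum Queue { case highPriority, lowPriority }

// Route expensive or old messages to a low-priority queue so that cheap,
// fresh traffic keeps flowing during a spike. Thresholds are arbitrary here.
func route(_ message: Message,
           costThreshold: TimeInterval = 5,
           maxAge: TimeInterval = 60,
           now: Date = Date()) -> Queue {
    let age = now.timeIntervalSince(message.enqueuedAt)
    return (message.estimatedCost > costThreshold || age > maxAge)
        ? .lowPriority
        : .highPriority
}
```

A consumer would then drain the high-priority queue first and only pick up low-priority messages when spare capacity is available.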

There are many strategies to make asynchronous systems resilient to workload changes, such as shuffle-sharding, dropping old messages (message time-to-live), heartbeating long-running messages, and so on. We are not going to cover all of them here, but you can take a look at the next section for further learning resources.

Find out more!

That’s all for today

Today we discussed five of the most popular resiliency patterns out there. You probably want to consider them when building your resilient cloud-based application.

I hope you enjoyed part I. In the next part, we’re going to discuss resiliency patterns related to databases. Stay tuned!

[1] Oggerino, C., High Availability Network Fundamentals. Cisco, 2001.

[2] System Reliability and Availability

[3] Building Global, Multi-Region Serverless Backends

[4] Uptime & downtime conversion cheat sheet

[5] Uptime & downtime tool

[6] Regions and Availability Zones in Azure

[7] Regions, Availability Zones, and Local Zones

[8] Amazon RDS Multi-AZ Deployments

[9] Patterns for scalable and resilient apps

[10] Roy Fielding’s Dissertation — Chapter 5: Representational State Transfer (REST)

[11] Understanding asynchronous messaging for microservices

[12] Asynchronous integration as a decoupling technique

[13] Avoiding insurmountable queue backlogs

[14] Cisco IOS QoS Solutions Configuration Guide, Release 12.2SR

[15] Priority Queue pattern


Resiliency patterns for cloud-based applications — Part I was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[The Office is the Natural Habitat of Culture]]> https://tech.new-work.se/the-office-is-the-natural-habitat-of-culture-50050f2688ed?source=rss----35cb8c78d3cf---4 https://medium.com/p/50050f2688ed Mon, 10 Jul 2023 15:23:22 GMT 2023-07-10T15:23:22.449Z After spending so much time working outside the office, many of us may prefer to continue working from home, wearing comfortable clothes, and having our pets by our side. On the other hand, many companies offer hybrid work options but would like to see employees back in the office more often. In this process, it is clear that many people resist the return. According to a McKinsey study, 29% of people say they would likely change jobs if companies were to require 100% in-person work again.

Source: Living Offices | S05E04 | NEW WORK

Having more flexibility in where, when, and how we work is very positive as it contributes to our ability to spend more time with family and structure our days to better accommodate morning workouts or lunchtime walks. But returning to the office brings some significant benefits for employees as well, not just for companies. Let’s go through some of them:

Connection
One of the most important benefits is the possibility of connection. The value of “water cooler conversations” is well-documented, as these are the perfect opportunity for colleagues to exchange stories and talk about their experiences in person. Whether it’s a work-related topic or sharing a personal problem, this creates social cohesion and important connections among colleagues, and helps build a stronger working culture.

Collaboration
When we think about day-to-day interactions, many of the best instant and spontaneous collaborations tend to happen when we meet each other face-to-face. It’s about connecting with other people who do what we do. While we can connect with people via Zoom or phone, the creative spark happens more naturally when people are together. Additionally, office activities like team-building events, where people see each other in person and interact as a group, stimulate team spirit, create a social buzz, and help transform professional relationships into personal ones.

Health and Well-Being
Whether we are introverted or extroverted, we need to connect with other people. We naturally tend to spend more time with people based on our preferences, but research shows that if we don’t spend adequate face-to-face time with others, we will experience a decline in well-being, increased illness, and reduced life expectancy.

Technology helps us to stay in touch, but it is inadequate as it does not allow us to read non-verbal cues as well as in-person interactions. Moreover, we are limited by delays, technical glitches, and that inconvenient mute button. Being in front of the camera can also make us hyper-aware (who wants to spend so much time looking at themselves on camera?), creating an intensity in the workday that can be tiring. Being together in the office can reduce technology fatigue and is crucial for the physical and emotional health of everyone.

Our brains also benefit when we are face-to-face. It promotes the release of oxytocin, which is the feel-good chemical in our brains. In addition to providing doses of contentment, it also reduces brain chemicals like cortisol and adrenocorticotropic hormone, which in high amounts are associated with high blood pressure, weight gain, and heart disease.

Relationship Building
Being physically present also helps in building relationships. Familiarity and regular in-person contact tend to increase acceptance and trust. This is because we are more likely to have more information about people — what they are going through, what motivates them, and how they operate — and it is more likely that we will understand them, empathise with them, and feel more comfortable talking to them. And with this openness, trust tends to naturally increase.

Enhance our teaching and learning skills
We have a lot to teach our teammates, regardless of our seniority in the company. Being in the office allows people to learn from each other, which increases overall satisfaction. Being together also enables us to support each other, because we can notice when someone is feeling down or struggling with a work problem. Our presence is important because others rely on us and trust us, and we can rely on them as well.

Besides, sociologically, the most important way people learn is by observing others. Even unconsciously, we are always observing and modeling the behavior of others. Teammates need our energy, our sense of humor, and our unique talents. Contributing to the community helps us feel fulfilled because it reminds us of our value and validates all the ways the group can benefit from our talents.

In conclusion, the challenge today will undoubtedly be to create a shared purpose and attract people to the offices, as there is a unique experience when we come together and add value by exchanging ideas. Technology has played a crucial role in evolving the way we work in recent years, and this trend is expected to grow even further.

Therefore, we should take advantage of all the potential that technology has to offer while empowering talent. Although we know that, technically, the work can in many cases be done remotely or in a hybrid manner, we should not underestimate the value of in-person human contact.


The Office is the Natural Habitat of Culture was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[The importance of accessibility on your iOS app]]> https://tech.new-work.se/the-importance-of-accessibility-on-your-ios-app-daed7f566a01?source=rss----35cb8c78d3cf---4 https://medium.com/p/daed7f566a01 Mon, 26 Jun 2023 07:13:57 GMT 2023-06-28T09:59:57.645Z The importance of accessibility for your iOS app

Leave no one behind

Accessibility on apps is the practice of designing apps that are usable by people with different abilities and preferences.

As an iOS developer, it’s important to create apps that are accessible to everyone, regardless of their abilities. Besides being the right thing to do, it also expands your user base and makes your app more appealing to a wider audience.

This is not a code or tech article; it’s intended as a nudge to start identifying any accessibility issues your app might have. I’ve also included a bunch of tools to help you with that, and I recommend you make a list of points you find interesting so you can start exploring and implementing them in your own lovely app.

Let’s get started! 🚀

Accessibility icon surrounded by other iOS icons.

Identify colour contrast issues 🎨

One common accessibility issue is a lack of contrast between text and background colours. This can make it difficult for users with low vision to read content in your app. To fix this issue, you can review your app’s colour contrast and make sure it meets accessibility standards.

There are several online tools you can use to check your app’s colour contrast, such as the WebAIM Contrast Checker. To use the tool, enter the foreground and background colors for your UI elements and it will provide you with a contrast ratio. Ideally, your app’s contrast ratio should be at least 4.5:1 for normal text and 3:1 for large text.

This image shows the difference between text with high contrast and text with low contrast, in both light and dark mode variants.
Image from designcode.io
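Those 4.5:1 and 3:1 figures come from WCAG’s relative-luminance formula, so you can also check colours in code. A minimal sketch (the function names are mine; channels are sRGB values in 0...1):

```swift
import Foundation

// Relative luminance of an sRGB colour per WCAG 2.1 (channels in 0...1).
func relativeLuminance(r: Double, g: Double, b: Double) -> Double {
    func linearize(_ c: Double) -> Double {
        c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
    }
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
}

// Contrast ratio between two luminances: (lighter + 0.05) / (darker + 0.05).
func contrastRatio(_ l1: Double, _ l2: Double) -> Double {
    (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)
}

// Black text on a white background is the best case, 21:1,
// comfortably above the 4.5:1 minimum for normal text.
let ratio = contrastRatio(relativeLuminance(r: 1, g: 1, b: 1),
                          relativeLuminance(r: 0, g: 0, b: 0))
```

Checking `contrastRatio(...) >= 4.5` for your text/background pairs could even be automated in a unit test.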

Adjust font sizes 📏

While some users might have perfect vision, others may rely on assistive technologies or have specific visual impairments. In such cases, the ability to adjust the font size is key to a comfortable and enjoyable user experience. This is where Dynamic Type comes into play. By offering the flexibility to increase or decrease the font size, Dynamic Type lets users with low vision or other visual impairments customise their reading experience to suit their individual needs.

A user who can effortlessly read and navigate your app is more likely to engage with its content and features, which in turn increases user satisfaction and retention.

Implementing Dynamic Type in an iOS application is relatively straightforward. Apple’s developer guidelines provide comprehensive documentation on integrating Dynamic Type throughout the app’s interface, ensuring that text elements automatically adjust their size based on the user’s preferred settings.
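As a rough mental model of what this scaling does (the categories and multipliers below are invented for illustration; Apple’s real curves are non-linear and vary per text style, so real apps should use UIFontMetrics or SwiftUI’s built-in support rather than hard-coded factors):

```swift
// Invented content-size categories with made-up multipliers,
// purely to illustrate the idea of user-driven font scaling.
enum ContentSize: Double {
    case small = 0.85
    case medium = 1.0       // the user's default setting
    case large = 1.15
    case extraLarge = 1.35
}

func scaledFontSize(base: Double, for size: ContentSize) -> Double {
    base * size.rawValue
}

// A 17pt body font grows when the user prefers larger text.
let defaultSize = scaledFontSize(base: 17, for: .medium)
let accessibleSize = scaledFontSize(base: 17, for: .extraLarge)
```

The point is that the user, not the app, picks the multiplier; your layout has to tolerate every resulting size.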

If you use SwiftUI you can easily preview how your app will look in different scenarios. At the bottom of your preview, select Variants > Dynamic Type variants to see something like this:

SwiftUI preview with Dynamic Type variants enabled.

Provide clear feedback 👀

In addition to visual impairments, it’s essential to consider the needs of users with cognitive disabilities when designing and developing your iOS app. These individuals may face challenges related to memory, attention, and comprehension, making it crucial to provide clear and effective feedback within your app.

This includes visual cues, such as displaying success messages or highlighting completed tasks, as well as auditory feedback through sounds or voice prompts. We can also enable haptic feedback using the API that Apple provides. Using a combination of visual, auditory and haptic cues helps accommodate all users, ensuring that they can comprehend and process information more effectively.

Let’s say you have a button that leads to a page with more information about your product. Instead of just changing the button colour when the user taps on it, you can also add a short message that says ‘Loading…’ to indicate that the action is being processed.

As well as immediate feedback, you should also consider providing error messages that are clear, concise, and easily understandable.

Support VoiceOver 🦻

One of the most important accessibility features on iOS is VoiceOver. VoiceOver is a screen reader that provides audio descriptions of on-screen content, allowing users with visual impairments to navigate your app.

With VoiceOver activated, users can navigate an app using swipe gestures and double-tapping to activate buttons or links. VoiceOver also provides audio feedback when an action is performed, such as a button press or text field input.

Here, you can get started by documenting how VoiceOver should work in your app. Define groups of elements that must be read together and set traits, labels, values and hints for each element that needs them.

Trait: Constants that describe how an accessibility element behaves.
Label: A string that succinctly identifies the accessibility element.
Value: A string that represents the current value of the accessibility element.
Hint: A string that briefly describes the result of performing an action on the accessibility element.

The graphic below is an example of how I defined VoiceOver behaviour for the You section of XING, the app I work on at New Work SE. The numbers show the order in which VoiceOver should read the content. The yellow notes are the default behaviour, while the green ones mark a custom element we should create.

You section of XING’s app with notes with all the traits, values, hints and labels that VoiceOver should read.

There are a lot of different points to cover here, so I’ll include them in a future post. In the meantime, you can check this documentation from Apple to find out more about this.

Testing your app’s accessibility 🧐

The final step to improve your app’s accessibility is to test it thoroughly. Here are a few things to keep in mind:

👀 Test with assistive technologies: Use assistive technologies like the Accessibility Inspector app to test your app’s accessibility. To open it, just click on Xcode > Open Developer Tool > Accessibility Inspector.

The Inspector pointer button allows you to point to a specific section of the screen to see the accessibility traits and values of that element.

The Audit button is also really useful as it lists all the issues it finds in the current view of your app.

A screenshot of the Accessibility Inspector with arrows indicating what each button does.
Accessibility Inspector

🕵️ Test with real users: Get feedback from users with disabilities to make sure that your app is truly accessible to everyone.

📱 Test across devices: Make sure you test your app’s accessibility on different devices and iOS versions to ensure compatibility.

Leave no one behind 🧏

Improving the accessibility of your iOS app is both important and a great way to expand your user base and provide a more inclusive app. There are a lot more ways to make sure your app is accessible, but by following the tips and techniques I’ve outlined in this article, you’ve already got a great starting point to create a more accessible and user-friendly app. So go ahead and give it a try! 🦾

Thanks for reading my post. You’ll also find me over on Twitter: @dev_alhambra.

🙇🏻


The importance of accessibility on your iOS app was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Unit Testing Combine Publishers]]> https://tech.new-work.se/unit-testing-combine-publishers-6f581e30c370?source=rss----35cb8c78d3cf---4 https://medium.com/p/6f581e30c370 Fri, 19 May 2023 09:49:47 GMT 2023-05-19T09:49:47.671Z When testing Combine-related code we must keep in mind that we’re most probably testing asynchronous code. And how do we expect to test asynchronous code in Xcode? Right, with expectations.

Expectations in Xcode allow us to define specific conditions that must be met before a test case can be considered successful or failed. By using expectations, we can accurately validate the behaviour of Combine publishers and other asynchronous code.

In this article, we will analyse the various kinds of expectations available in Xcode and the code required to set them up. Finally, we’ll propose a new method that not only reduces the boilerplate code but also improves the readability of our tests.

Photo by Alexandru Yicol on Unsplash

Let’s start by looking at what kind of expectations are available in Xcode and which ones are best suited for testing Publishers:

XCTestExpectation

This is the basic type of expectation since it allows us to manually control its outcome by calling fulfill(). It can be used to wait until a publisher emits some specific value and we can easily apply this to any @Published property wrapper for that matter:

let expectation = XCTestExpectation(description: "Wait for the publisher to emit the expected value")
viewModel.$keywords.sink { _ in
} receiveValue: { value in
    if value.contains("Cool") {
        expectation.fulfill()
    }
}
.store(in: &cancellables)

wait(for: [expectation], timeout: 1)

XCTKVOExpectation

This is an expectation that fulfils on a specific key-value observing (KVO) condition. But this one is deprecated and Apple promotes using the next one:

XCTKeyPathExpectation

This expectation allows waiting for changes to a property specified by key path for a given object. It can be used like this:

let loadedExpectation = XCTKeyPathExpectation(keyPath: \ViewModel.isLoaded, observedObject: viewModel, expectedValue: true)

While this cannot be applied directly to publishers, we might think it can at least be used for testing @Published properties, but it has some constraints:

  • The tested object must inherit from NSObject
  • The object properties we want to observe must be declared as @objc dynamic
  • The property types that we want to observe must also be representable in Objective-C.

Even if we adapt our code with all those requirements we would still be able to test only a specific value, not the stream of values from the publisher or their completion.

This is far from ideal so let’s look at the next one:

XCTNSPredicateExpectation

This is an expectation that is fulfilled when an NSPredicate is satisfied. It can be used as:

let predicateExpectation = XCTNSPredicateExpectation(predicate: NSPredicate { _, _ in
    viewModel.isLoaded
}, object: .none)

But when using this, you may have found that, even if the tested code is fast, the expectation must be awaited for more than a second:

wait(for: [predicateExpectation], timeout: 1) // this fails
wait(for: [predicateExpectation], timeout: 1.5) // this succeeds

It turns out that XCTNSPredicateExpectation is slow because it uses a polling mechanism to check the predicate periodically, which makes it better suited for UI tests. So it’s better to avoid it in unit tests if we want them to run as fast as possible.

XCTNSNotificationExpectation

Finally, there’s this expectation that is fulfilled when an expected NSNotification is received. It can be useful in some scenarios but again, not so much for testing Combine-based code.

Level-set the expectations

After looking at the options above it’s clear that we’re just left with the XCTestExpectation as the only choice for testing Publishers.

But writing tests this way always involves some boilerplate code: creating the expectation, subscribing to the publisher, manually fulfilling the expectation and, finally, storing the cancellable.

If we care about keeping our tests readable and easy to understand, let’s try to simplify this process by introducing our own custom expectation:

PublisherValueExpectation

We can create an XCTestExpectation subclass that does this repetitive task for us. Here, the PublisherValueExpectation is an expectation that will be fulfilled automatically when the publisher emits a value that matches a given condition.

public final class PublisherValueExpectation<P: Publisher>: XCTestExpectation {
    private var cancellable: AnyCancellable?

    public init(
        _ publisher: P,
        condition: @escaping (P.Output) -> Bool,
        description: String = "Publisher expected to emit a value that matches the condition."
    ) {
        super.init(description: description)
        cancellable = publisher.sink { _ in
        } receiveValue: { [weak self] value in
            if condition(value) {
                self?.fulfill()
            }
        }
    }
}

With this we can just write:

let publisherExpectation = PublisherValueExpectation(viewModel.$keywords) { $0.contains("Cool") }

We can also add a convenience initializer to simply pass an expected value if that conforms to Equatable:

public convenience init(
    _ publisher: P,
    expectedValue: P.Output
) where P.Output: Equatable {
    let description = "Publisher expected to emit the value '\(expectedValue)'"
    self.init(publisher, condition: { $0 == expectedValue }, description: description)
}

And this allows us to write an even more compact expectation that reads nicely:

let publisherExpectation = PublisherValueExpectation(viewModel.$isLoaded, expectedValue: true)

Thanks to Combine we can adapt the tested publisher to check many things. For instance:

  • Expect many values to be emitted by the publisher
let publisherExpectation = PublisherValueExpectation(publisher.collect(3), expectedValue: [1,2,3])
  • Expect the first or last value being emitted by the publisher
let publisherExpectation = PublisherValueExpectation(publisher.first(), expectedValue: 1)
let publisherExpectation = PublisherValueExpectation(publisher.last(), expectedValue: 5)

Keep up with the expectations

The full project can be found on GitHub. It also includes two more useful expectations:

  • PublisherFinishedExpectation: Wait for a publisher to complete successfully (optionally after emitting a certain value or condition)
let publisherExpectation = PublisherFinishedExpectation(publisher, expectedValue: 2)
  • PublisherFailureExpectation: Wait for a publisher to complete with a failure (optionally with an expected error)
let publisherExpectation = PublisherFailureExpectation(publisher, expectedError: ApiError(code: 100))

Conclusion

There are many alternatives for making assertions on publishers, but this approach is familiar to anyone who already uses test expectations in Xcode and can easily be adapted to existing tests.

Thanks for reading.


Unit Testing Combine Publishers was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[How to use Swift Package Manager products from Cocoapods]]> https://tech.new-work.se/how-to-use-swift-package-manager-products-from-cocoapods-96f225a12a20?source=rss----35cb8c78d3cf---4 https://medium.com/p/96f225a12a20 Fri, 19 May 2023 09:48:48 GMT 2023-05-19T09:48:48.814Z Roughly one year ago, New Work SE’s iOS Platform Team started a proof of concept about how we could move from our existing XING iOS App, based in over one hundred Cocoapods internal libraries and other 15 external dependencies, into a clean and modern project based only in SwiftPM packages.

If you’re interested, I explained in a previous article here how we simplified the previous dependency graph in order to tackle this project in an easier way.

How to control your iOS dependencies

Now, I want to share with you, how the process of migration to SwiftPM can be easier than expected if you are an iOS developer who works in a large iOS project with:

  • a Cocoapods setup with a large number of Development Pods
  • a local Swift Package where you want to move these Pods as targets
Photo by Michał Parzuchowski on Unsplash

Which was the problem we wanted to solve?

We started by defining our migration plan, created using our jungle tool in an Xcode Playground to explore the project’s dependency graph and build the required migration steps, so that a module could only be migrated once the modules it depends on had already been migrated. Easy.

But then, only a few days after we started the migration process, we arrived at the kind of situation you see in the image below, where only one or a small number of modules were used by our not-yet-migrated Pods.

Can we use our already migrated modules in the SPM Package in our still active Pods?

We asked ourselves, “Is there a way to start consuming these new migrated modules into our legacy Cocoapods Modules setup without having duplicated modules in both package managers?”

Also, we wanted to start seeing results in the final iOS App as soon as possible. Integrating some of these migrated libraries into the final binary would allow us to start monitoring their behaviour in our APM solution and enjoying the benefits without waiting for the complete migration and a (possibly) hard switch to SPM.

Our first approach was to use this post_install hook that configures a Swift Package Manager dependency in an existing Xcode project (the ones that Cocoapods creates for us). But then, this error was happening when we built the app:

How did we solve this problem?

We didn’t want to move to SwiftPM by creating a new dependency on the Xcodeproj Ruby library that Cocoapods uses. So, why not use the new XcodeProj Swift version by Tuist to configure our Pod Xcode projects and have an easier way to configure them properly?

We defined two different commands, for local and remote Swift packages:

import XcodeProj
import PathKit
import ArgumentParser

struct AddLocalPackageCommand: ParsableCommand {
    static var configuration = CommandConfiguration(
        commandName: "addLocal",
        abstract: "Injects a local SPM Package"
    )

    @Option(help: "The Pod project directory.")
    var projectPath: String

    @Option(help: "The SwiftPM package directory.")
    var spmPath: String

    @Option(help: "The product from that package to be injected.")
    var product: String

    @Option(help: "The target to be configured with that dependency")
    var targetName: String

    func run() throws {
        let projectPath = Path(projectPath)
        let spmPath = Path(spmPath)
        let xcodeproject = try XcodeProj(path: projectPath)
        let pbxproj = xcodeproject.pbxproj
        let project = pbxproj.projects.first
        _ = try project?.addLocalSwiftPackage(path: spmPath, productName: product, targetName: targetName)
        try xcodeproject.write(path: projectPath)
    }
}

struct AddRemotePackageCommand: ParsableCommand {
    static var configuration = CommandConfiguration(
        commandName: "addRemote",
        abstract: "Injects a remote SPM Package"
    )

    @Option(help: "The Pod project directory.")
    var projectPath: String

    @Option(help: "The SwiftPM package URL.")
    var spmURL: String

    @Option(help: "The product from that package to be injected.")
    var product: String

    @Option(help: "The exact version to be used.")
    var version: String

    @Option(help: "The target to be configured with that dependency")
    var targetName: String

    func run() throws {
        let projectPath = Path(projectPath)
        let xcodeproject = try XcodeProj(path: projectPath)
        let pbxproj = xcodeproject.pbxproj
        let project = pbxproj.projects.first
        _ = try project?.addSwiftPackage(repositoryURL: spmURL, productName: product, versionRequirement: .exact(version), targetName: targetName)
        try xcodeproject.write(path: projectPath)
    }
}

This way, we could include this Swift CLI tool (we called it XcodeSPMI, for obvious reasons) in our Package and use it after the Pod installation.

import PackageDescription

let package = Package(
    name: "libraries",
    products: [
        .library(name: "FeatureB", type: .dynamic, targets: ["FeatureB"]),
        .executable(name: "XcodeSPMI", targets: ["XcodeSPMI"]),
    ],
    dependencies: [
        .package(url: "https://github.com/tuist/XcodeProj.git", .upToNextMajor(from: "8.9.0")),
        .package(url: "https://github.com/apple/swift-argument-parser", from: "1.2.2"),
    ],
    targets: [
        .target(name: "FeatureB", path: "FeatureB"),
        .executableTarget(
            name: "XcodeSPMI",
            dependencies: [
                .product(name: "XcodeProj", package: "XcodeProj"),
                .product(name: "ArgumentParser", package: "swift-argument-parser")
            ]
        )
    ]
)

Once we removed the dependency of FeatureB in the FeatureA’s .podspec file, this is how we inject (as a post_integrate step in the Podfile) the SPM dependency in the Cocoapods project using the previous .executable product from our Swift Package:

post_integrate do |installer|
  # FeatureB
  featureB_dependant = ["FeatureA"]

  puts "Injecting FeatureB SPM framework into ..."
  featureB_dependant.each do |project|
    puts "  #{project}"
    `swift run --package-path libraries XcodeSPMI addLocal --project-path Pods/#{project}.xcodeproj --spm-path ../libraries/ --product FeatureB --target-name #{project}`
  end
end

Is this enough? It is for .binaryTargets, but not for regular .targets like the one you can see in the example (FeatureB).

For .binaryTargets, which is the current solution we’re using for these shared modules, we are creating the .xcframework artifacts by using swift-create-xcframework.

To be able to remove the No such module ‘FeatureB’ error from your build log for plain .targets, there is an extra step needed during module compilation. Looking at the logs, we found that something was not being fully passed to the swift-frontend tool. The missing part was this variable you can find in Apple’s Build Settings documentation (SWIFT_INCLUDE_PATHS), which should also contain the directory where other modules can be found during that build stage.

Import Paths
Setting name: SWIFT_INCLUDE_PATHS
A list of paths to be searched by the Swift compiler for additional Swift modules.

As we can change the build settings for our development pods, that’s what we need to include in our .podspec file:

Pod::Spec.new do |s|
  s.name = 'FeatureA'
  s.version = '1.0.0'
  s.author = 'Oswaldo Rubio'
  s.license = 'commercial'
  s.homepage = "https://github.com/osrufung/UsingSPMFromCPDemo"
  s.source = { git: 'https://github.com/osrufung/UsingSPMFromCPDemo' }
  s.summary = "#{s.name} – Version #{s.version}"
  s.ios.deployment_target = '15.0'
  s.swift_version = '5.7'
  s.source_files = "Sources/**/*.swift"
  # This resolves the missing SWIFT_INCLUDE_PATHS variable
  s.pod_target_xcconfig = { 'SWIFT_INCLUDE_PATHS' => '$(inherited) ${PODS_BUILD_DIR}/$(CONFIGURATION)$(EFFECTIVE_PLATFORM_NAME)' }
end

Conclusion

Having this SPM injection into Cocoapods projects allows us to increase the number of integrated modules (modules that are linked into the final app) before the migration of the pending modules is finished.

We get multiple benefits from this “hack”:

  • being able to remove from Cocoapods modules that are already migrated to SPM.
  • injecting the dependency as a .binaryTarget in .xcframework format, which reduces local build times as well as time resources and credits on the CI side.
  • foundational modules shared by both package managers without any kind of library duplication.

We expect to finish this migration process during this year. This is the current migration state right now, and I hope to share my thoughts about the whole migration project with you once we finish.

I mentioned before that we had over one hundred internal modules in our Podfile, and this is how the migration process looks today (mid-May 2023). Some modules have been deleted (because some features were removed), and others have been migrated but cannot yet be integrated into the app, because some of the modules that depend on them are still on the pending-migration list.

Current SPM migration status in our XING iOS project (percentages of migrated, integrated, and pending-to-migrate modules)

If you want to experiment yourself, the issue can be reproduced, and is already solved, in this demo project, along with a really tiny SwiftPM injector .executableTarget based on XcodeProj. I hope this is useful for your projects; please share your thoughts or problems in the comments or in the GitHub repository.


How to use Swift Package Manager products from Cocoapods was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Seniority Balance in Software Engineering Teams]]> https://tech.new-work.se/seniority-balance-in-software-engineering-teams-6973db8e1b38?source=rss----35cb8c78d3cf---4 https://medium.com/p/6973db8e1b38 Fri, 17 Mar 2023 14:02:37 GMT 2023-03-17T14:02:37.522Z In Software Engineering teams, seniority balance plays a crucial role in achieving delivery capacity, mentoring opportunities and capability, employee retention, costs, and sustainability. A team with a good balance of seniority levels can provide several advantages, including a diverse range of skill sets, perspectives, and experience levels.

New Work, Portugal — Software Engineering Team

Delivery Capacity

Seniority balance is essential for the delivery capacity of Software Engineering teams. A team composed of only junior engineers may not have the necessary experience to handle complex tasks or to make significant decisions. On the other hand, a team composed of only senior engineers may lack fresh perspectives and the out-of-the-box thinking needed to come up with innovative solutions to problems.

By having a mix of experience, a team can have the necessary expertise to handle complex tasks, while also having the energy and creativity to come up with new ideas. This balance can help ensure that projects are completed on time and within budget.

Mentoring Opportunities and Capability

Seniority balance also provides opportunities for mentoring and professional development. Senior engineers can serve as mentors to junior engineers, passing on their knowledge and expertise. This mentoring can help junior engineers learn new skills, gain confidence, and grow in their careers.

Additionally, mentoring works in both directions: senior engineers pass on knowledge and experience, just as was once done for them, but they can also learn from junior engineers, whose new ideas and fresh perspectives may help them think outside the box and come up with innovative solutions.

Employee Retention

A team with a good seniority balance can also help with employee retention. Junior engineers may leave if they feel they are not being challenged or if they do not have opportunities for growth. However, if they have senior engineers as mentors, they may be more likely to stay with the company and grow within the team.

Senior engineers may also leave if they feel that they are not being challenged or if they do not have opportunities to mentor junior engineers. By having a mix of both senior and junior engineers, a team can provide opportunities for both groups to grow and develop within the team.

Costs and Sustainability

Finally, seniority balance can also impact costs and sustainability. Senior engineers may command higher salaries, which can impact the overall cost of the team. However, if they are mentoring junior engineers, those junior engineers can take on more responsibilities over time, potentially reducing the need for additional senior engineers in the future.

Additionally, if a team is composed solely of senior engineers, there may be a risk of knowledge loss if those engineers leave the company. By having a mix of both senior and junior engineers, the team can ensure that knowledge is transferred and retained within the team, providing long-term sustainability.

Scenarios and their impact

There is a shortage of published studies about seniority structures and their impact on organizations. Most articles tend to focus on only one dimension, like delivery capacity, mentorship capacity or structure sustainability. Here are some considerations for three different scenarios:

1. 50% seniors, 40% mids, and 10% juniors

This scenario has a high percentage of senior engineers, which can provide a wealth of experience and knowledge to the team. Senior engineers can mentor mid-level and junior engineers, providing opportunities for professional development and knowledge transfer. However, having a smaller percentage of junior engineers may not only limit the team’s ability to bring in fresh perspectives and new ideas (which can impact innovation and creativity) but also the sustainability of the team, as senior engineers are often more prone to leave.

2. 20% seniors, 30% mids, and 50% juniors

This scenario has a higher percentage of junior engineers, which can provide opportunities for fresh perspectives and new ideas. However, having a lower percentage of senior engineers may limit the team’s ability to handle complex tasks and make significant decisions. Additionally, without enough mid-level engineers, there may not be enough people with the necessary experience to mentor junior engineers effectively.

3. 30% seniors, 40% mids, and 30% juniors

This scenario has a more balanced mix of senior, mid-level, and junior engineers. This balance can provide opportunities for mentoring, professional development, and knowledge transfer, while also allowing for fresh perspectives and new ideas. Having a mix of experience levels can also provide the team with the necessary expertise to handle complex tasks and make significant decisions.

Conclusion

In conclusion, seniority balance is critical for Software Engineering teams. It can impact delivery capacity, mentoring opportunities and capability, employee retention, costs, and sustainability. A team with a good mix of senior and junior engineers can provide a diverse range of skill sets, perspectives, and experience levels, which can lead to more innovative solutions and better overall performance. Therefore, it is important for companies to consider seniority balance when building and managing Software Engineering teams.


Seniority Balance in Software Engineering Teams was originally published in New Work Development on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>