Reducing User Perceived Latency with a Proactive Prefetching Middleware for Mobile SOA Access

Daniel Schreiber, Andreas Göb, Erwin Aitenbichler, Max Mühlhäuser
Copyright © 2011 | Pages: 18
DOI: 10.4018/jwsr.2011010104

Abstract

Network latency is one of the most critical factors for the usability of mobile SOA applications. This paper introduces prefetching and caching enhancements for an existing SOA framework for mobile applications to reduce user perceived latency. Latency reduction is achieved by proactively sending data to the mobile device that is likely to be requested at a later time. This additional data is piggybacked onto responses to actual requests and injected into a client-side cache, so that it can be used without an additional connection. The prefetching is done automatically using a sequence prediction algorithm. The benefits of the prefetching and caching enhancements were evaluated for different network settings, and a reduction of user perceived latency of up to 31% was found in a typical scenario. In contrast to other prefetching solutions, our piggybacking approach also makes it possible to significantly increase the battery lifetime of the mobile device.

Introduction

Employees spend more and more of their work time away from their desks, e.g., visiting customers, performing on-site maintenance, or similar activities. Therefore, access to enterprise information systems (EIS) from mobile devices is becoming increasingly important.

Today, most EIS, such as customer relationship management (CRM) systems or enterprise resource planning (ERP) systems, use a service oriented architecture (SOA). They use and compose various backend services to implement their functionality. Hence, frameworks like those described in (Hamdi, Wu & Benharref, 2008; Natchetoi, Kaufmann, & Shapiro, 2008; Tergujeff, Haajanen, Leppanen & Toivonen, 2007; Wu, Gregoire, Mrass, Fung & Haslani, 2008) have emerged, aiming to support mobile SOA access. These frameworks use several techniques to address the peculiarities of mobile scenarios. For example, they introduce a client-side cache to bridge temporary losses of network connectivity, or use data compression to avoid long download times in low-bandwidth mobile networks. In general, the scenario of mobile SOA access is shown in the lower part of Figure 1: a client accesses a proxy server via a low-bandwidth, high-latency connection, e.g., EDGE, and the proxy accesses one or more backend servers via high-bandwidth, low-latency connections, e.g., a corporate LAN (a minimal code sketch of this call path follows Figure 1).

Figure 1. Mobile (lower part) and non-mobile (upper part) setup. In a mobile setup, the position of the low-bandwidth, high-latency link is reversed compared to the non-mobile setup.
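To make the mobile setup concrete, the following minimal Java sketch shows the proxy side of this call path. It is our illustration, not code from any of the cited frameworks: the class name MobileSoaProxy, the BACKEND URL, and the handle method are hypothetical placeholders.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch of the proxy in the lower part of Figure 1: the client reaches
    // the proxy over the slow mobile link; the proxy calls the backend SOA
    // services over the fast corporate LAN. All names are hypothetical.
    public class MobileSoaProxy {
        private static final String BACKEND = "http://backend.example.corp/services/"; // assumed URL
        private final HttpClient lan = HttpClient.newHttpClient();

        // Handle one logical client request: a single slow-link roundtrip may
        // translate into one or more fast LAN calls composed at the proxy.
        public String handle(String serviceCall) throws Exception {
            HttpRequest req = HttpRequest.newBuilder(URI.create(BACKEND + serviceCall)).build();
            HttpResponse<String> resp = lan.send(req, HttpResponse.BodyHandlers.ofString());
            return resp.body(); // returned to the client over the mobile link
        }
    }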

However, the usability of mobile SOA is still greatly affected by the high latency of mobile network connections. A typical use case for an EIS requires several roundtrips from the mobile device to the SOA infrastructure, as multiple services need to be accessed. Each roundtrip causes a relatively large overhead during connection setup (Xun, Liao, & Zhu, 2008). This leads to noticeable and disturbing delays in the UI (Pervilä & Kangasharju, 2008), reducing the usability of mobile SOA. Leaving the network connection open over long periods to avoid this overhead is not an option, as this consumes too much battery power, which is a scarce resource for mobile devices (Cao, 2002).
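To make this cost concrete (our illustration, with assumed numbers not taken from the paper): if a use case requires n sequential roundtrips over a link with connection setup time t_setup and roundtrip time t_RTT, the user perceived delay is roughly

    T ≈ n × (t_setup + t_RTT)

before any payload transfer time is counted. With n = 5 service calls and t_setup + t_RTT ≈ 300 ms, as is plausible for an EDGE connection, this already amounts to about 1.5 s of waiting.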

In this paper, we present prefetching and caching enhancements for an existing framework for mobile SOA access (Hamdi, Wu & Benharref, 2008) that reduce user perceived latency. User perceived latency is the latency measured at the UI level for a typical use case of mobile SOA access, as suggested in (Domenech, Pon, Sahuquillo & Gil, 2007).

Our enhancements do not decrease the battery lifetime of the mobile device, as the prefetched data is sent along with legitimate response data. The prefetching is done at a proxy server that employs a sequence prediction algorithm to predict future requests. Using a sequence prediction algorithm for prefetching avoids the need to hand-craft prefetching functionality at the application level. At the client, the prefetched data is stored in a cache. Thus, no communication overhead occurs should the data actually be requested by the user at a later point in time.
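As a sketch of this mechanism, the following Java code uses a simple first-order successor model as the sequence predictor and piggybacks the predicted response onto the actual one. The predictor is our simplification, and the names (PiggybackPrefetcher, fetchFromBackend) are illustrative placeholders; the framework's actual algorithm may differ.

    import java.util.HashMap;
    import java.util.Map;

    // Proxy-side sketch: observe the request sequence, predict the most
    // likely next request, and piggyback its response onto the reply.
    public class PiggybackPrefetcher {
        // counts.get(a).get(b) = how often request b followed request a
        private final Map<String, Map<String, Integer>> counts = new HashMap<>();
        private String previous = null;

        // Answer a request and piggyback the most likely next response.
        public Map<String, String> handle(String request) {
            if (previous != null) {
                counts.computeIfAbsent(previous, k -> new HashMap<>())
                      .merge(request, 1, Integer::sum);
            }
            previous = request;

            Map<String, String> reply = new HashMap<>();
            reply.put(request, fetchFromBackend(request)); // the actual response

            String predicted = predictNext(request);
            if (predicted != null) {
                // Piggybacked payload: the client injects this into its cache,
                // so no extra mobile-link connection is needed if it is used.
                reply.put(predicted, fetchFromBackend(predicted));
            }
            return reply;
        }

        // Most frequent successor of the current request, if any.
        private String predictNext(String request) {
            Map<String, Integer> successors = counts.get(request);
            if (successors == null) return null;
            return successors.entrySet().stream()
                    .max(Map.Entry.comparingByValue())
                    .map(Map.Entry::getKey).orElse(null);
        }

        private String fetchFromBackend(String request) {
            return "<response for " + request + ">"; // placeholder backend call
        }
    }

On the client side, the extra entries in the reply would be written into the cache, so a later request for the predicted key can be answered locally without opening a new connection.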
