Practical design and implementation of recognition assisted dynamic surveillance system

Computers and Electrical Engineering 37 (2011) 1182–1192
journal homepage: www.elsevier.com/locate/compeleceng

Jenq-Shiou Leu *, Hung-Jie Tzeng, Chi-Feng Chen, Wei-Hsiang Lin
National Taiwan University of Science and Technology, Taipei, Taiwan

Article history: Received 5 March 2010; Received in revised form 11 May 2011; Accepted 31 May 2011; Available online 5 July 2011.

Abstract

Open Service Gateway Initiative (OSGi) and Open-source Computer Vision (OpenCV) are widely used platforms for developing applications: OSGi provides a service platform with high application interoperability, while OpenCV provides many application programming interfaces (APIs) for image processing. In this paper, we design a recognition assisted surveillance system based on the OSGi and OpenCV platforms. The system features dynamic monitoring by a camera carried by a robot and a Java 2 Micro-Edition (J2ME) viewer on a mobile phone. With the assistance of image recognition techniques, the captured frames are adaptively reproduced for handheld phones in a limited bandwidth environment. The proposed adaptive pause time control mechanism efficiently improves the synchronization relationship between captured and viewed frames across heterogeneous networks. The evaluation results show that the proposed scheme saves power at the movable camera and achieves a shorter time delay between the captured and viewed frames.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Surveillance systems have made great progress with the assistance of the Internet. Such systems let users monitor distant events through a browser, so remote live events can be watched wherever users can access the Internet. If the monitoring application is extended to a handheld device such as a cellular phone, surveillance users can view remote events at any time and place via any available mobile wireless network, from the General Packet Radio Service (GPRS), Wireless Fidelity (Wi-Fi), and Universal Mobile Telecommunications System (UMTS) to the emerging Worldwide Interoperability for Microwave Access (WiMAX) and 3rd Generation Partnership Project Long Term Evolution (3GPP-LTE).

On the other hand, cameras pre-installed at fixed locations offer little flexibility. If the camera is mounted on a movable device, a surveillance user can view corners that fixed cameras cannot reach. Furthermore, if the mobile camera and the mobile viewer/commander on a cellular phone are integrated on a common service platform, the service becomes easily expandable. However, the limited screen of a mobile phone may hinder the user from clearly recognizing objects.

In this article, we implement a recognition assisted dynamic surveillance system by integrating the Open Services Gateway Initiative (OSGi) [1] open service platform with the recognition functions supported by the Open-source Computer Vision (OpenCV) [2] development platform. The system also includes a movable camera attached to an embedded system carried by a robot [3] and a viewing Java 2 Micro-Edition (J2ME) [4] application on a cellular phone.


* Corresponding author.
E-mail addresses: [email protected] (J.-S. Leu), [email protected] (H.-J. Tzeng), [email protected] (C.-F. Chen), [email protected] (W.-H. Lin).
doi:10.1016/j.compeleceng.2011.05.015


The OSGi alliance was formed to produce open specifications for delivering services to local networks and devices [5]. Recently, many middleware developments have been based on the OSGi standard [6–8]. The OSGi service platform, a general-purpose, secure, managed Java software framework, extends Internet services to homes [9–11], automobiles [12], health care [13], and so on. OSGi can play a useful broker role for aggregating applications. The OpenCV development platform, in turn, implements common image processing and computer vision algorithms, including face detection, object segmentation, and recognition. The OpenCV APIs let us develop the recognition assisted functions in our surveillance system at low development cost.

Meanwhile, the Lego MindStorms series has been developed by the LEGO Company for years to give developers an inexpensive robotic environment. One of its major contributions is enabling simple, portable, and inexpensive experiments that prove the concept of a mobile robot service early in development, without extensive lab facilities [14,15]. We use a Lego MindStorms NXT to carry the movable camera in our system. Thanks to the wireless technology revolution, a wireless user can access Internet services anytime, anywhere via ubiquitous networks using a multi-mode device. J2ME is a typical mobile development technology [16,17]; a J2ME based wireless intelligent video surveillance system was implemented as a useful supplement to a traditional monitoring system thanks to its good mobility [18].

Our work on the dynamic surveillance system includes building Java based bundles on the OSGi service platform, invoking the OpenCV APIs for frame recognition, controlling the Lego MindStorms NXT over a Bluetooth connection using the iCommand Java package and the RXTX library [19], delivering the captured frames to OSGi through a Java based socket, and developing a J2ME remote viewer on a multi-mode mobile phone. In such a dynamic surveillance system, captured frames are normally obtained through a higher speed network inside a building and then delivered to remote mobile users through a lower speed network outside it. The mismatch of transmission capabilities between the two networks affects viewing continuity and playback liveness. Hence, adaptively pausing frame capturing based on network quality can save the power consumed by the mobile camera. Such a mechanism makes efficient use of the limited energy on the mobile robot and lets the remote viewer obtain captured frames that still carry live event information.

The rest of this paper is organized as follows. Section 2 reviews related work on surveillance systems. Section 3 describes the architecture of the proposed system. Section 4 depicts its operation flow. In Section 5, we evaluate the performance of the proposed system under different capturing–viewing schemes and conduct a simple comparison with other surveillance systems. A brief conclusion is presented in Section 6.
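As a concrete illustration of the frame delivery path mentioned above, the sketch below shows a camera-side sender pushing length-prefixed JPEG frames over a plain Java socket. The framing (timestamp + length + payload), class name, and port handling are our assumptions; the paper only states that frames reach OSGi through a Java based socket.

import java.io.DataOutputStream;
import java.net.Socket;

// Minimal camera-side sender: one length-prefixed JPEG per request.
// The host, port, and wire framing are illustrative assumptions.
public final class FrameSender implements AutoCloseable {
    private final Socket socket;
    private final DataOutputStream out;

    public FrameSender(String gatewayHost, int port) throws Exception {
        socket = new Socket(gatewayHost, port);
        out = new DataOutputStream(socket.getOutputStream());
    }

    public void send(byte[] jpegFrame) throws Exception {
        out.writeLong(System.currentTimeMillis()); // capture timestamp (the T_MC of Section 4)
        out.writeInt(jpegFrame.length);            // frame length prefix
        out.write(jpegFrame);                      // JPEG payload
        out.flush();
    }

    @Override public void close() throws Exception { socket.close(); }
}

The capture timestamp written ahead of each frame is the kind of value the gateway can later use in the pause time computation described in Section 4.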

2. Related work

In a traditional surveillance system [20,21], many cameras are pinned at fixed locations to capture events, and users keep a close watch on the videos sent from these cameras via a monitor at a fixed location. The Internet facilitates remote monitoring, but such systems still lack flexibility, and much past effort has gone into improving them. A web-based surveillance system [22] was proposed to let surveillance viewers monitor and control remotely by cellular phone. If both sides – cameras and monitors – become movable, however, the monitored area grows and the viewer can observe events remotely anytime and anywhere.

A network camera mounted on a mobile robot [23,24] can help capture events from wider and more dynamic angles. As hardware costs fall dramatically and robot capabilities increase quickly, robotics is becoming more important in everyday life. A robotic dual-camera vision system [25] composes a telephoto camera whose field of view can be moved within the larger field of a wide-angle camera. Movable robots are thus a good way to broaden the monitoring view by carrying the event capturing camera. An autonomous mobile robotic system [26] developed on a multisensory mobile platform can navigate its environment autonomously and perform surveillance activities. Integrated with current wireless network technology, a robotic surveillance system can extend security sensing within a home or building. The surveillance robot in [27] adopts the ZigBee protocol for wireless communication, acting as a mobile video sensor node in a ZigBee-based home control network. The 802.11b-enabled RISCBOT [28] lets online users navigate the environment easily, assisted by a visual recognition algorithm, through a web-based interface. Robotics solutions adapted to unstructured and unknown environments can improve the safety and security of personnel [29]. However, most of the robotic techniques above are designed for a special purpose; a more flexible solution is desirable for future extension. Moreover, if the surveillance system is built on an extensible platform, it can easily be integrated with future applications.

On the other hand, the unmatched bandwidth at the two ends, which sit in different network environments, may defer playback at the viewer side, so the transmitted frame rate should adapt to the heterogeneous network conditions. Adaptive wireless multi-level ECN (AWMECN) [30] conducts rate adaptation and quality adaptation in the application layer for better quality of service (QoS) when delivering multimedia over heterogeneous wireless networks. A simpler and more intuitive way to adapt the rate is to adjust the capturing period through an adaptive pause time control mechanism at the video input end. Following these design issues, the work presented in this paper aims to design a recognition assisted dynamic surveillance system by fully integrating wireless, robotics, image processing, and mobile phone techniques.


3. System architecture

This section illustrates the conceptual processing flow and the main application modules in the proposed dynamic surveillance system. The system operates in the four steps shown in Fig. 1:

(1) The live event is captured by a camera mounted on a movable robot inside a building and sent to an OSGi-based gateway via a high speed network, such as a Wi-Fi network.
(2) Because the local and remote sides have different network transmission capabilities, the gateway regenerates recognition assisted frames so the viewer receives more recognizable frames on the limited screen of a handheld phone in a limited bandwidth environment. The recognition functions can easily be developed with the support of OpenCV.
(3) The processed frames are delivered to the remote mobile viewer over various mobile networks, such as a 3G or GPRS network.
(4) If the remote viewer wants to change the capturing direction or angle, the viewer can use the handheld phone as a controller and remotely issue control commands to the movable robot via the gateway.

The system contains the three main components shown in Fig. 2.

3.1. OSGi service platform

The OSGi platform mainly comprises a Java Virtual Machine (JVM), a set of running applications called bundles, and an OSGi framework. A bundle is the minimal deliverable application in OSGi and is managed by the OSGi framework, while a service is the minimal functioning unit. A bundle is therefore designed as a set of cooperating services, which are discovered after being published in the OSGi service registry. There are four bundles and three services on the platform (a registration sketch follows the list):

• SurveillanceServerBundle: responsible for authorizing users and handling service requests, including delivering captured videos to the remote viewer by invoking functions in CamService, and controlling the MindStorms NXT on the remote viewer's demand by invoking functions in NXTService.
• CamBundle: responsible for registering and publishing CamService, which retrieves the remote surveillance videos, in OSGi.
• NXTBundle: responsible for registering and publishing NXTService, which controls the NXT, in OSGi.
• FrameProcessingBundle: responsible for registering and publishing FrameProcessingService, which adjusts frame quality to save transmission cost and performs object contouring and face detection by invoking the related OpenCV APIs, so the remote viewer can easily recognize the content of the images captured by the mobile camera. To fit the limited screen size of the mobile phone, FrameProcessingService also rescales the dimension of each captured video frame.
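For illustration, the sketch below shows how CamBundle might publish CamService in the registry. The CamService interface and its stub implementation are our assumptions; only the bundle and service names come from the paper.

import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Assumed service interface; the paper names the service but not its API.
interface CamService {
    byte[] requestFrame(); // asks the mobile camera for one captured frame
}

// Sketch of CamBundle's activator publishing CamService in OSGi.
public class CamBundleActivator implements BundleActivator {
    private ServiceRegistration registration;

    public void start(BundleContext context) {
        CamService service = new CamService() {
            public byte[] requestFrame() { return new byte[0]; } // stub body
        };
        registration = context.registerService(
                CamService.class.getName(), service, new Hashtable());
    }

    public void stop(BundleContext context) {
        registration.unregister(); // withdraw the service on bundle stop
    }
}

SurveillanceServerBundle would then discover the service through the OSGi registry (e.g., via a ServiceReference) rather than binding to CamBundle directly, which is what gives the platform its interoperability.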

Fig. 1. The conceptual flow of the proposed dynamic surveillance system.
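Step (2) of this flow relies on OpenCV detection APIs. The paper predates OpenCV's official Java bindings, so the sketch below uses the modern org.opencv (3.x) Java API as a stand-in for the C calls the authors would have wrapped; the cascade file name and the rectangle-based face marking are our assumptions.

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

// Face detection plus rescaling, as in step (2) of the flow.
// (The OpenCV native library must be loaded before this class is used.)
public final class FrameRecognizer {
    private final CascadeClassifier faces =
            new CascadeClassifier("haarcascade_frontalface_default.xml");

    // Marks each detected face with a rectangle, then rescales the frame
    // to the screen size negotiated with the phone.
    public Mat process(Mat frame, int targetW, int targetH) {
        MatOfRect found = new MatOfRect();
        faces.detectMultiScale(frame, found);
        for (Rect r : found.toArray()) {
            Imgproc.rectangle(frame, r.tl(), r.br(), new Scalar(0, 255, 0), 2);
        }
        Mat scaled = new Mat();
        Imgproc.resize(frame, scaled, new Size(targetW, targetH));
        return scaled;
    }
}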


Fig. 2. The proposed dynamic system architecture based on OSGi and OpenCV. (Diagram: the RemoteViewer (J2ME) reaches the gateway over GPRS/3G/HSDPA/Wi-Fi; the gateway runs SurveillanceServerBundle, CamBundle/CamService, NXTBundle/NXTService, and FrameProcessingBundle/FrameProcessingService on OSGi over a JVM and OS; the camera connects via Wi-Fi and the NXT robot via Bluetooth through iCommand.)

Fig. 3. A mobile camera mounted on an embedded system carried by a robot. (Photo: camera and Wi-Fi NIC on an ARM9 DMANAV2450 board, carried by a MindStorms NXT.)

3.2. Mobile camera

The Lego MindStorms NXT robot has its own digital camera accessory, but the video quality it captures is not good enough for remote viewing. Hence we mount a digital camera on an ARM-based embedded system, carried by the robot, to capture events at the locations the remote viewer can ask the robot to move to. With the assistance of the raisable arm the camera is hooked on, the camera can also capture events over a wider up-and-down range. The physical construction of the mobile camera is shown in Fig. 3.

3.3. Mobile phone

J2ME is used to develop the controlling and viewing functions on a mobile phone. A viewer can use the phone keypad to control the moving direction of the mobile camera via NXTService and to retrieve the surveillance videos captured by the remote camera via CamService (a keypad handling sketch appears below).

4. Operation flow

In the operation flow of our system, the event is first captured by the mobile camera and sent back to CamService on OSGi through a Wi-Fi network interface card. After CamService gets the captured frame, it invokes FrameProcessingService to conduct an additional image processing procedure, including frame quality adjusting, object contouring, and face detecting. Due to the limited viewing screen on the mobile phone, the originally captured video frame is rescaled to the phone's screen size; the frame size parameter is auto-negotiated when the mobile device connects to SurveillanceServerBundle. The system then delivers the rescaled, recognition assisted images to the mobile phone. Furthermore, we implement an asynchronous viewing mode on RemoteViewer.
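A minimal sketch of the keypad handling on the J2ME RemoteViewer follows. The command strings and the transport inside sendCommand() are our assumptions; the paper only states that the keypad drives the camera through NXTService.

import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;

// Sketch of the RemoteViewer keypad handling (MIDP 2.0). The mapping of
// game actions to movement commands is an illustrative assumption.
public class ViewerCanvas extends Canvas {
    protected void keyPressed(int keyCode) {
        switch (getGameAction(keyCode)) {
            case UP:    sendCommand("FORWARD");    break;
            case DOWN:  sendCommand("BACKWARD");   break;
            case LEFT:  sendCommand("TURN_LEFT");  break;
            case RIGHT: sendCommand("TURN_RIGHT"); break;
        }
    }

    protected void paint(Graphics g) {
        // would draw the most recently received frame here
    }

    private void sendCommand(String cmd) {
        // would forward cmd to the gateway (e.g., over an HTTP or socket
        // connection), which relays it to the robot via NXTService
    }
}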


Fig. 4. Ideal synchronization relationship between captured and viewed frames.

In this mode, two independent threads run on OSGiGateway: one requests frames from MobileCamera, and the other delivers the captured frames from MobileCamera to RemoteViewer. On a fire-and-forget basis, RemoteViewer does not need to repeatedly check whether a new captured frame has arrived. Ideally, every frame captured by MobileCamera is viewed by RemoteViewer after a constant time interval, regardless of the kinds of networks the captured frames traverse. Fig. 4 shows this ideal consistency relationship between captured and viewed frames.

However, because the MobileCamera-to-OSGiGateway and OSGiGateway-to-RemoteViewer links may sit in heterogeneous networks with different transmission capabilities, the frames received by RemoteViewer may gradually desynchronize from the frames captured by MobileCamera. In other words, the scene a remote viewer sees may lag far behind the live event, and the desynchronization worsens cumulatively as execution time grows. Fig. 5 shows the desynchronization effect between captured and viewed frames in a real network environment.

Therefore, we add a pause time control on OSGiGateway to cancel this annoying effect of the asynchronous viewing mode on the mobile phone: OSGiGateway is forced to wait for a pause time before each frame request made to MobileCamera. This improvement lets the remote viewer obtain a near real-time scene. Choosing a proper pause time is a trade-off, however: the desynchronization effect persists if the pause time is set too short, while the captured frame cannot be viewed timely at RemoteViewer if it is set too long. We therefore adopt an adaptive pause time setting mechanism based on the delivery of the previous frame. The adaptive pause time is set as Eq. (1) shows; the related process flow is shown in Fig. 6.

T_p(n+1) = T_RV(n) − T_MC(n) − T_SS(n) − T_proc(n),    (1)

where T_p(n+1) is the pause time before requesting the (n+1)-th frame; T_RV(n) is the time the n-th frame is received at RemoteViewer; T_MC(n) is the time the n-th frame is captured at MobileCamera; T_SS(n) is the time taken to receive the n-th frame from MobileCamera at the OSGi Gateway; and T_proc(n) is the n-th frame's processing time at the OSGi Gateway.
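As a concrete reading of Eq. (1), the sketch below updates the pause time from the four measurements of the previous frame. Treating the timestamps as epoch milliseconds, the durations as elapsed milliseconds, and clamping the result at zero are our assumptions.

// Sketch of the adaptive pause-time update of Eq. (1).
public final class PauseTimeController {
    private long pauseMs; // T_p(n+1), applied before the next frame request

    // tRv:   time the n-th frame was received at RemoteViewer (timestamp)
    // tMc:   time the n-th frame was captured at MobileCamera (timestamp)
    // tSs:   transfer time of the n-th frame, MobileCamera -> gateway
    // tProc: processing time of the n-th frame at the gateway
    public void update(long tRv, long tMc, long tSs, long tProc) {
        // Eq. (1); the zero floor (when the viewer keeps up) is an assumption
        pauseMs = Math.max(0L, (tRv - tMc) - tSs - tProc);
    }

    public long pauseMs() { return pauseMs; }
}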


Fig. 5. Desynchronization effect between captured and viewed frames.

Its detailed operation algorithm is illustrated as follows:

• At RemoteViewer side:
    Initialize a viewing request toward OSGiGateway;
    While Begin
        Receive the frame from OSGiGateway and display it;
    While End


Fig. 6. Synchronization relationship between captured and viewed frames with a pause time control mechanism.

• At OSGiGateway side:
    Receive a command from RemoteViewer;
    While Begin
        Issue a command ("VideoFrameRequesting") to MobileCamera via CamService;
        Receive a frame from MobileCamera via CamService;
        Conduct frame processing on the received frame;
        Send the processed frame to RemoteViewer;
        Wait for the adaptive PauseTime;
    While End


• At MobileCamera side:
    Receive a command from OSGiGateway;
    If (Command == "VideoFrameRequesting") Then
        Capture a frame;
        Send the captured frame to OSGiGateway;
    End If
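Putting the gateway's role together, a compact Java rendering of its loop might look as follows. The three nested interfaces are placeholders standing in for the paper's CamService, FrameProcessingService, and viewer connection, and the loop reuses the PauseTimeController sketched after Eq. (1); only the control flow is taken from the algorithm above.

// Sketch of the OSGiGateway loop from the pseudocode above.
public final class GatewayLoop {
    interface Cam { byte[] requestFrame(); }                // stand-in for CamService
    interface FrameProcessor { byte[] process(byte[] f); }  // stand-in for FrameProcessingService
    interface Viewer { void send(byte[] f); }               // stand-in for the RemoteViewer link

    public void serve(Cam cam, FrameProcessor fp, Viewer viewer,
                      PauseTimeController pause) throws InterruptedException {
        while (true) {
            byte[] raw = cam.requestFrame();  // "VideoFrameRequesting"
            byte[] out = fp.process(raw);     // quality adjust, contour, face detect
            viewer.send(out);                 // deliver to RemoteViewer
            Thread.sleep(pause.pauseMs());    // adaptive pause per Eq. (1)
        }
    }
}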

5. System evaluation and simple comparison

To enhance mobility and responsiveness, our surveillance system uses cellular or wireless transmission to deliver surveillance images between the two mobile ends. Recognition assisted images, originally captured by the mobile camera on the NXT, are sent to a mobile phone for viewing, as Fig. 7 shows. The system asks the mounted mobile camera to capture images at an original dimension of 320 × 240 (around 62 KB). After an additional image processing step on the captured frame, including frame quality adjusting, object contouring, and face detecting, the system downsizes the frame to the screen size of the mobile phone and sends it to the mobile viewer via the 3G or Wi-Fi network the viewer is attached to.

Our evaluation focuses on the performance of different capturing–viewing modes ("asynchronous" and "asynchronous with an adaptive pause time control mechanism") under three evaluation metrics: the time delay for frames sent from MobileCamera to RemoteViewer, the frame receiving rate of RemoteViewer at a fixed frame capturing rate at MobileCamera, and the battery lasting time at MobileCamera, over different wireless networks (Wi-Fi, 3G). Some key parameters for the evaluation are shown in Table 1.

• The time delay for frames sent from MobileCamera to RemoteViewer:

Fig. 7. System evaluation scenario.


The time delay for receiving frames reflects the synchronization, or liveness, between the frames captured by MobileCamera and the frames received at the remote viewer site. We attach a timestamp recording when each frame is captured by MobileCamera and compare it with the time the frame is received at the mobile viewer site. The difference tells us the synchronization status between captured and viewed frames across networks with different transmission rates: the smaller the receiving delay, the better the frame synchronization between MobileCamera and RemoteViewer. Fig. 8 shows that the "asynchronous with an adaptive pause time control mechanism" mode achieves a lower average frame delay than the "asynchronous" mode.
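A minimal sketch of this measurement is shown below, assuming each frame is preceded on the wire by its capture timestamp (as in the sender sketch of Section 1). Clock synchronization between the two hosts is also an assumption; the paper does not say how skew was handled.

import java.io.DataInputStream;

// Reads the capture timestamp written ahead of a frame and computes the
// end-to-end delay against the local receive time.
public final class DelayProbe {
    public static long frameDelayMs(DataInputStream in) throws Exception {
        long captureTs = in.readLong(); // T_MC written by the camera side
        return System.currentTimeMillis() - captureTs;
    }
}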

Table 1
Parameters for the evaluation.

Device:                  Sony Ericsson G900
Access network:          3G            | Wi-Fi
Access data rate:        160–320 kbps  | 4–6 Mbps
Source frame dimension:  320 × 240
Scaled frame dimension:  235 × 210

Fig. 8. Average frame delay (ms) comparison among Asynchronous (Wi-Fi), Asynchronous + Adaptive Pause Time Control (Wi-Fi), Asynchronous (3G), and Asynchronous + Adaptive Pause Time Control (3G).

Fig. 9. Needed frame capturing rate (frames/sec) comparison among the same four configurations.

Fig. 10. Battery lasting time (min) comparison among the same four configurations.

Table 2
Comparisons of different surveillance systems.

Surveillance system                                      Camera   Viewer/commander  Access network           Mobility  Service extensibility
Traditional systems [20,21]                              Fixed    Fixed             Fixed networks           Low       Low
Systems with a mobile viewer [22]                        Fixed    Movable           Fixed/wireless networks  Medium    Low
Systems with network camera and a mobile viewer [23,24]  Movable  Movable           Wireless networks        High      Low
Robotic surveillance systems [27,28]                     Movable  N/A               Wireless networks        Medium    Low
Our recognition assisted dynamic surveillance system     Movable  Movable           Wireless networks        High      High

• The needed frame capturing rate at the MobileCamera site:

In a pure asynchronous viewing mode on RemoteViewer, the frame capturing rate at the MobileCamera site is independent of the rate at the RemoteViewer site. However, since the transmission capabilities of the MobileCamera-to-OSGiGateway and OSGiGateway-to-RemoteViewer links may differ, the frames received by RemoteViewer gradually fall out of synchronization with the frames captured by MobileCamera if the rate mismatch is ignored. To keep the viewed scene synchronized with the live scene, the surveillance system should adaptively control the frame capturing rate so that MobileCamera does not capture frames that RemoteViewer will never receive and view; in other words, the capturing rate should be tuned to the transmission rate of the OSGiGateway-to-RemoteViewer link. Hence, the smaller the needed frame capturing rate at the MobileCamera site, the fewer frames MobileCamera captures in vain. Fig. 9 shows that the "asynchronous with an adaptive pause time control mechanism" mode needs a lower frame capturing rate than the "asynchronous" mode.

• The battery lasting time at MobileCamera:

This experiment measures how long the fully charged 1150 mAh battery on the Lego MindStorms NXT lasts at the MobileCamera site in the different capturing–viewing modes. The proposed pause time control mechanism effectively reduces the redundant frames that MobileCamera captures but RemoteViewer never receives and views, thereby saving energy at the MobileCamera site. Hence, the longer the battery lasting time at the MobileCamera site, the less power MobileCamera consumes. Fig. 10 shows that the "asynchronous with an adaptive pause time control mechanism" mode yields a longer battery lasting time than the "asynchronous" mode.

Besides, we briefly compare different surveillance systems in Table 2. With the assistance of wireless networks, the dynamic mobile surveillance system features high mobility at both the camera end and the viewer/commander end. Mobile viewers can use their smartphones to issue commands that move the remote camera and capture events at the remote site. Our system also delivers motion images, assisted by recognition processing, back to the viewer instead of a still snapshot. Meanwhile, being based on OSGi and OpenCV, which facilitate the integration of future functions and many image processing APIs, the proposed system features high service extensibility as well.

6. Conclusion

The recognition assisted dynamic surveillance system in this paper provides an integrated surveillance service solution. In our development, OSGi is selected as the central service platform for its extensibility and application interoperability, while OpenCV is chosen as the development aid for its mature image processing APIs. The integrated system raises mobility at both ends: a mobile user end that monitors and controls the capturing, and a mobile camera end that flexibly captures the event. Besides, the recognition function helps the user view the captured events more clearly on the limited screen of a mobile phone.
We developed this system using an OSGi service platform, an OpenCV development platform, a camera mounted on an embedded system carried by a robot, and a J2ME based viewer and commander program on a mobile phone. We also proposed a pause time control mechanism on the gateway that efficiently improves the synchronization relationship between captured and viewed frames across heterogeneous networks.

References

[1] "OSGi". Available: http://www.osgi.org/, 2011.
[2] "OpenCV". Available: http://opencv.willowgarage.com/wiki/, 2011.
[3] "Lego MindStorms NXT". Available: http://mindstorms.lego.com/, 2011.
[4] "Java 2 Platform, Micro Edition". Available: http://java.sun.com/javame/index.jsp, 2011.
[5] Marples D, Kriens P. The open services gateway initiative: an introductory overview. IEEE Commun Mag 2001;39(12):110–4.
[6] Lee C, Nordstedt D, Helal S. Enabling smart spaces with OSGi. IEEE Pervasive Comput 2003;2(3):89–94.
[7] Gu T, Pung HK, Zhang DQ. Toward an OSGi-based infrastructure for context-aware applications. IEEE Pervasive Comput 2004;3(4):66–74.
[8] Valtchev D, Frankov I. Service gateway architecture for a smart home. IEEE Commun Mag 2002;40(4):126–32.


[9] Wu CL, Liao CF, Fu LC. Service-oriented smart-home architecture based on OSGi and mobile-agent technology. IEEE Trans Syst Man Cybern Part C: Appl Rev 2007;37(2):193–205.
[10] López de Vergara JE, Villagrá VA, Fadón C, González JM, Lozano JA, Álvarez-Campana M. An autonomic approach to offer services in OSGi-based home gateways. Comput Commun 2008;31(13):3049–58.
[11] Lin CL, Wang PC, Hou TW. A wrapper and broker model for collaboration between a set-top box and home service gateway. IEEE Trans Consumer Electron 2008;54(3):1123–9.
[12] Li Y, Wang F, He F, Li Z. OSGi-based service gateway architecture for intelligent automobiles. In: IEEE symposium on intelligent vehicles, Las Vegas, NV, USA; 2005. p. 861–5.
[13] Chen IY, Tsai CH. Pervasive digital monitoring and transmission of pre-care patient biostatics with an OSGi, MOM and SOA based remote health care system. In: Sixth annual IEEE international conference on pervasive computing and communications, Hong Kong, China; 2008. p. 704–9.
[14] Heck BS, Clements NS, Ferri AA. A LEGO experiment for embedded control system design. IEEE Control Syst Mag 2004;24(5):61–4.
[15] Kim SH, Jeon JW. Introduction for freshmen to embedded systems using LEGO mindstorms. IEEE Trans Educat 2009;52(1):99–108.
[16] Read K, Maurer F. Developing mobile wireless applications. IEEE Internet Comput 2003;7(1):81–6.
[17] Kochnev DS, Terekhov AA. Surviving java for mobiles. IEEE Pervasive Comput 2003;2(2):90–5.
[18] Xu L, Wang Z, Wang H, Shi A, Li C. A J2ME-based wireless intelligent video surveillance system using moving object recognition technology. In: Congress on image and signal processing, vol. 2, Sanya, China; 2008. p. 281–5.
[19] RXTX. Available: http://www.rxtx.org/, 2011.
[20] Foresti GL. A real-time system for video surveillance of unattended outdoor environments. IEEE Trans Circuits Syst Video Technol 1998;8(6):697–704.
[21] Esteve M, Palau CE, Martinez-Nohales J, Molina B. A video streaming application for urban traffic management. J Network Comput Appl 2007;30(2):479–98.
[22] Imai Y, Hori Y, Masuda S. Development and a brief evaluation of a web-based surveillance system for cellular phones and other mobile computing clients. In: Conference on human system interactions, Krakow, Poland; 2008. p. 526–31.
[23] Tseng YC, Wang YC, Cheng KY, Hsieh YY. iMouse: an integrated mobile surveillance and wireless sensor system. Computer 2007;40(6):60–6.
[24] Park JH, Sim KB. A design of mobile robot based on network camera and sound source localization for intelligent surveillance system. In: International conference on control, automation and systems, Seoul, Korea; 2008. p. 674–8.
[25] Tonet O, Focacci F, Piccigallo M, Mattei L, Quaglia C, Megali G, et al. Bioinspired robotic dual-camera system for high-resolution vision. IEEE Trans Robot 2008;24(1):55–64.
[26] Paola DD, Milella A, Cicirelli G, Distante A. An autonomous mobile robotic system for surveillance of indoor environments. Int J Adv Robot Syst 2010;7(1):19–26.
[27] Song GM, Yin KJ, Zhou YX. A surveillance robot with hopping capabilities for home security. IEEE Trans Consumer Electron 2009;55(4):2034–9.
[28] Patel S, Sanyal R, Sobh T. RISCBOT: a WWW-enabled mobile surveillance and identification. J Intelligent Robot Syst 2006;45(1):15–30.
[29] Habib MK, Baudoin Y. Robot-assisted risky intervention, search, rescue and environmental surveillance. Int J Adv Robot Syst 2010;7(1):1–8.
[30] Karimi OB, Fathy M. Adaptive end-to-end QoS for multimedia over heterogeneous wireless networks. Comput Elect Eng 2010;36(1):45–55.

Jenq-Shiou Leu received his B.S. degree in Mathematics and his M.S. degree in Computer Science and Information Engineering from National Taiwan University, Taipei, Taiwan, in 1991 and 1993, respectively. He was with Rising Star Technology, Taiwan, as an R&D Engineer from 1995 to 1997, and worked in the telecommunication industry (Mobitai Communications and Taiwan Mobile) from 1997 to 2007 as an Assistant Manager. He obtained his Ph.D. degree, on a part-time basis, in Computer Science from National Tsing Hua University, Hsinchu, Taiwan, in 2006. In February 2007, he joined the Department of Electronic Engineering at National Taiwan University of Science and Technology, Taipei, Taiwan, as an Assistant Professor, and he has been an Associate Professor there since February 2011. His research interests include mobile services over heterogeneous networks, heterogeneous network integration, and P2P networking.

Hung-Jie Tzeng received his B.S. degree in Electronic Engineering from National Taiwan University of Science and Technology, Taipei, Taiwan, where he is currently an M.S. student in the Department of Electronic Engineering. His research interests include smart home technology and wireless location positioning.

Chi-Feng Chen received his B.S. degree in Electronic Engineering from National Taiwan University of Science and Technology, Taipei, Taiwan, where he is currently an M.S. student in the Department of Electronic Engineering. His research interests include smart home technology and Java based application integration.

Wei-Hsiang Lin received his B.S. degree in Computer and Communication Engineering from National Kaohsiung First University of Science and Technology, Kaohsiung, Taiwan, and his M.S. degree in Electronic Engineering from National Taiwan University of Science and Technology, Taipei, Taiwan, where he is currently a Ph.D. student in the Department of Electronic Engineering. His research interests include heterogeneous network integration and smart home technology.