In this research, we will identify the gap between enterprise requirements and traditional relational database capabilities in order to explore alternative database solutions. We will examine NoSQL data management for big data to identify its key advantages. We will gain insight into how technology transitions in software, architecture, and process models are unfolding in new ways.
Effectively extracting reliable and trustworthy information from Big Data has become crucial for large business enterprises. Obtaining useful knowledge for making better decisions to improve business performance is not a trivial task. The most fundamental challenge for Big Data extraction is to handle data uncertainty for emerging business needs such as marketing analysis, future prediction and decision making.
It is clear that the answers to analytical queries performed on imprecise data repositories are naturally associated with a degree of uncertainty. However, effective data analysis and decision making depend on reliable and accurate data. Therefore, this project will develop new techniques and novel algorithms to extract reliable and useful information from massive, distributed, large-scale data repositories.
OLAP is based on a multidimensional data model for complex analytical and ad-hoc queries with rapid execution times. These queries are either routine or on-demand, and revolve around OLAP tasks. Most such queries are reusable and optimized in the system. Therefore, the queries recorded in the query logs for completing various OLAP tasks may be reusable.
The query logs usually contain a sequence of SQL queries that reveal users' action flows, preferences, interests, and behaviours. This research project will investigate feature extraction to identify query patterns and user behaviours from historical query logs.
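As a concrete illustration, query-pattern extraction from a log can start by reducing each SQL statement to a template (masking its literals) and counting template frequencies. This is a minimal sketch with a made-up log; the `normalize` helper and its regexes are illustrative assumptions, not the method proposed in the project:

```python
import re
from collections import Counter

def normalize(sql: str) -> str:
    """Reduce a SQL query to a template by masking literals,
    so queries differing only in constants group together."""
    sql = re.sub(r"'[^']*'", "?", sql)          # string literals -> ?
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)  # numeric literals -> ?
    return re.sub(r"\s+", " ", sql).strip().lower()

def frequent_templates(log, top_n=3):
    """Return the most common query templates in a log."""
    return Counter(normalize(q) for q in log).most_common(top_n)

log = [
    "SELECT region, SUM(sales) FROM facts WHERE year = 2020 GROUP BY region",
    "SELECT region, SUM(sales) FROM facts WHERE year = 2021 GROUP BY region",
    "SELECT product FROM facts WHERE price > 100",
]
print(frequent_templates(log))
```

The first two queries collapse to the same template, so their pattern is counted twice; recurring templates like this are candidates for recommendation.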
The expected result is a technique for recommending forthcoming queries to help decision makers with data analysis. The purpose of this research is to improve the efficiency and effectiveness of OLAP in terms of computation cost and response time. The challenges for big data analysis include investigation, collection, visualization, exploration, distribution, storage, transmission, and security.
The development of big data sets is driven by the additional information derivable from the analysis of large sets of related data, which allows data correlations to be turned into useful information and knowledge. For more details please contact Phoebe.

Bar code readers are used in various applications ranging from supermarket checkouts to medical devices. Bar codes are also incorporated into exhibit labels and evidence bags. This project will examine techniques to restore partial barcodes and develop a validation method to ensure the results obtained are valid.
This project focuses on speech-based correlates of a variety of medical conditions, using automatic signal-processing and computational methods. If such speech indications can be recognized and quantified automatically, this information can be used to support the diagnosis and treatment of medical conditions in clinical settings and to enable further research into cognition. This research will explore features extracted from the audio signal and will present applied research that uses computational methods to develop assistive and adaptive speech technologies.

Recently, the Nintendo Wii handheld remote controllers and the Wii Fit balance board have been used to support physical activity, movement, balance and health at home. What is the usefulness of virtual reality systems for people with physical activity limitations?
Imaging technologies such as Magnetic Resonance Imaging and Ultrasonography are giving researchers the opportunity to investigate image structures. This offers the chance to diagnose disease and assess health through image analysis. For more details, please discuss with Phoebe.

Prokaryotes, single-celled organisms like bacteria, do not have an enclosed nucleus, so their DNA floats in the cytoplasm. This project will use computational techniques to analyse DNA sequences to assess supercoiling in the context of packing large amounts of DNA, and its implications for 3D structures.
Machine learning, profile generation and statistical techniques are combined to generate a suite of predictive tools for the bioinformatics community. The comparative genomics approach compares two or more genomes (the total heritable portion of an organism). Traditional visual presentations have centred on linear tracks with connecting lines to show points of similarity or difference. In this project you will overlay large amounts of comparative data on a set of 3D surfaces that are controlled through natural human interaction devices such as the Xbox Kinect.
This project requires you to develop a web application that will be used by students and teachers to help determine how concepts are being understood by the class. On the presenter's screen, the lecturer will have a panel showing how well the class is understanding the content being taught.
Significant literature analysis of existing techniques in this research area would be a feature of the project. Recently, in an effort to improve the performance of wireless networks, there has been increased interest in protocols that rely on interactions between different layers of the OSI layered architecture. This project will focus on the transmission issues of H.264 video over such networks. The cross-layer architecture can involve interactions between Application-layer and MAC-layer primitives, based on the QoS requirements of the different data partitions in an H.264 stream.
These networks interconnect devices carrying video, voice and still images, which are connected to a central site for data and video analysis. The processing and quality of video and audio will be a challenging factor, especially with low-powered sensor nodes. Existing solutions, frameworks, and design implementations using test beds and simulations will be investigated. Open research issues at the application, transport, network, link, and physical layers of the communication protocol stack will also be investigated.
The algorithm uses the advantages of traditional FEC schemes, which add redundancy so that errors occurring within information packets can be corrected at the receiver. Using simulation experiments, our work has shown that the adaptive FEC algorithm improves the performance of the system by dynamically tuning the FEC strength to the current amount of wireless channel loss.
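One simple way such adaptive tuning can be sketched is a rule that sends enough redundant packets to cover the expected losses per block, plus a safety margin. The rule and its constants below are illustrative assumptions, not the actual algorithm evaluated in this work:

```python
import math

def fec_strength(loss_rate: float, data_packets: int = 10,
                 safety: float = 1.5) -> int:
    """Hypothetical adaptive-FEC rule: choose the number of redundant
    packets to cover the expected losses in a block of `data_packets`,
    scaled by a safety margin. `loss_rate` is the fraction of packets
    recently observed lost on the wireless channel."""
    expected_losses = loss_rate * data_packets
    return math.ceil(expected_losses * safety)

# As measured loss grows, the sender adds redundancy; when the
# channel is clean, almost no FEC overhead is paid.
for loss in (0.0, 0.1, 0.3):
    print(loss, fec_strength(loss))
```

The point of the sketch is the feedback loop: redundancy is a function of the measured channel state rather than a fixed parameter.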
Wireless vehicular communications for ITS is one of the most interesting and active research topics, requiring considerable effort from both industry and academia. In particular, studies on network routing and communication algorithms for V2V and V2I communication have posed various challenges. These challenges require developing new network routing protocols and designing communication algorithms, especially for IP data communications.
The rapid advances in recent years in integrated circuit electronics, wireless communication and micro-electromechanical systems have led to the emergence of wireless sensor network technology. Though the application domain of wireless sensor networks (WSNs), both extant and envisaged, is wide, there are a few common characteristics. WSNs are almost always single-application systems. The nodes in a WSN co-operate towards the goal of the application; the nodes do not compete for resources.
The protocols used in a WSN, therefore, are designed with objectives which differ from those of protocols in other computer networks. The nodes discover their neighbours and build the topology using distributed algorithms based on local knowledge. WSN nodes are resource constrained: to keep the size and cost of the nodes down, they have limited processing power, memory and radio range.
However, the resource constraint which has the most significant impact on many WSNs is the constraint on energy. WSN nodes are battery operated. Many wireless sensor networks are deployed in locations where battery replacement is not feasible. A node has to be discarded when the battery depletes. Energy scavenging may alleviate this problem in some sensor networks.
Most WSN protocols are very conscious of the limited supply of energy, and try to conserve energy. A medium access control protocol allows the nodes in a neighbourhood (nodes within radio range of each other) to access the communications medium without interfering with each other.
This may require monitoring communication in the neighbourhood, and communicating with neighbours even when no data is to be communicated. As stated earlier, in a WSN, energy expenditure needs to be kept low while carrying out the activities required for medium access control. A common strategy for energy conservation in WSNs is to allow the nodes to turn off their radio systems periodically (entering a sleep mode), as the radio in a WSN node is a major energy consumer.
However, when two neighbouring nodes need to communicate, both must be awake. One way to achieve this, as proposed in [1,2], is for one of the two neighbours to poll the other to set up a rendezvous. Another mechanism is for all nodes in a neighbourhood to follow a synchronised periodic sleep pattern, as proposed in [3,4].
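The energy argument for duty cycling can be made concrete with a back-of-the-envelope calculation. The power figures below are illustrative ballpark values for a mote-class radio, not measurements from the cited work:

```python
def average_power(duty_cycle, p_active_mw=60.0, p_sleep_mw=0.003):
    """Average radio power draw (mW) for a node that is awake a
    fraction `duty_cycle` of each period. The defaults (60 mW
    active, 3 uW asleep) are illustrative assumptions only."""
    return duty_cycle * p_active_mw + (1 - duty_cycle) * p_sleep_mw

# A 10% duty cycle cuts the radio's average draw by roughly 10x
# compared to an always-on radio.
print(average_power(1.0), average_power(0.1))
```

Because the sleep-mode draw is orders of magnitude below the active draw, battery lifetime is dominated almost entirely by the duty cycle, which is why sleep scheduling is central to WSN MAC design.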
The synchronised sleep pattern scheme is used in the popular Mica and Telos motes commercially produced by Crossbow. However, creating and maintaining a single sleep schedule requires overcoming a number of difficulties, including the problem of designing a distributed algorithm for merging clusters of nodes following different sleep schedules.

The projects that I offer are suitable for students who have a strong interest in software engineering and software project management.
Software size is important in the management of software development because it is a generally reliable predictor of project effort, duration, and cost. However, existing size measures have limitations. For example, SLOC can only be accurately counted when software construction is complete, while the most critical software estimations need to be performed before construction. FP can only be counted manually, and the estimator has to have special expertise and experience to do so.
Furthermore, FP counting involves a degree of subjectivity. Facing these challenges, researchers are looking for faster, cheaper, and more effective methods to estimate software size.
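To see why size is such a central input, consider the classic Basic COCOMO model, which maps estimated size in KLOC directly to effort in person-months. The coefficients shown are the published organic-mode values; this is an illustration of size-driven estimation, not the sizing method proposed in this project:

```python
def cocomo_basic_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO effort estimate (person-months) for an
    'organic mode' project: effort = a * KLOC**b."""
    return a * kloc ** b

# Because b > 1, doubling size more than doubles the effort estimate.
print(cocomo_basic_effort(10))  # ~27 person-months
```

Any error in the size estimate propagates directly into the effort and cost estimates, which is what motivates cheaper and earlier sizing techniques such as the UML-based approach investigated here.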
This project is to investigate the use of UML as a software sizing technique. In the last few years, the software engineering community has witnessed the growing popularity of Component-Based Development (CBD), refocusing software development from core in-house development to the use of internally or externally supplied components.
Component-Based Software Engineering (CBSE), as an emerging discipline, is targeted at improving the understanding of components and of systems built from components, and at improving the CBD process itself.
The field of Software Process Improvement (SPI), and in particular of assessment-based software process improvement, shares very similar goals with CBSE (shorter time-to-market, reduced costs and increased quality) and provides a wide spectrum of approaches to the evaluation and improvement of software processes.
This discipline has made considerable advances in the standardization of these approaches.

Requirements Engineering (RE) consists of eliciting stakeholders' needs, refining the acquired needs into non-conflicting requirement statements, and validating these requirements with stakeholders. Components are designed according to general requirements.
As such, the needs of stakeholders should be continually negotiated and changed according to the features offered by components. In addition, CBSD requirements need not be complete, as initially incomplete requirements can be progressively refined once suitable components are found. This reduces the scope of requirement negotiation, but makes it difficult to address quality attributes and system-level concerns.
In addition, components are selected on an individual basis, which makes it difficult to evaluate how components fit in with the overall system requirements. CBSD requirements are collected as high-level needs, and are then modelled by identifying the importance of each need. Each need is identified as mandatory, important, essential or optional. This project will investigate a systematic process for refining these requirements by specifying candidate components.

You may wonder, when we type the question "What is the weather today in Melbourne?" into a search engine, how the answer is produced.
A QA system contains a number of components. The first component is Query Expansion. The research project will analyse the query entered by the user, expand it by adding synonyms, identify the key words within the query, and finally decide the precise meaning of each key word.
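A minimal sketch of the expansion step follows. The synonym table and stop-word list are tiny, made-up stand-ins; a real system might draw on a lexical resource such as WordNet:

```python
# Hypothetical synonym table; a real system might use WordNet.
SYNONYMS = {
    "weather": ["forecast", "conditions"],
    "today": ["now", "current"],
}
STOPWORDS = {"what", "is", "the", "in"}

def expand_query(query: str) -> list[str]:
    """Keep the keywords of a query and widen each with synonyms."""
    terms = []
    for word in query.lower().strip("?").split():
        if word in STOPWORDS:
            continue
        terms.append(word)
        terms.extend(SYNONYMS.get(word, []))
    return terms

print(expand_query("What is the weather today in Melbourne?"))
```

The expanded term list then feeds the retrieval stage, improving recall for documents that phrase the same question differently.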
Word Sense Disambiguation techniques will be applied in this research.

The World Wide Web contains a huge number of documents which express opinions, including comments, feedback, critiques, reviews, blogs, etc. These documents provide valuable information which can help people with their decision making. For example, product reviews can help enterprises promote their products; comments on a policy can help politicians clarify their political strategy; event critiques can help the involved parties reflect on their activities.
However, the number of these types of documents is huge, so it is impossible for humans to read and analyse all of them. Thus, automatically analysing opinions expressed on various web platforms is increasingly important for effective decision making. The task of developing such a technique is called sentiment analysis or opinion mining. In this project, we attempt to analyse the sentiment orientation of a sample by identifying the connectives and phrases in its text.
As a result, the keyword which expresses the sentiment orientation of the author can be identified.
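A toy sketch of the connective idea is shown below, assuming a tiny hand-made sentiment lexicon (the word lists and the single-connective rule are illustrative, not the method developed in this project):

```python
# Tiny hand-made lexicon; a real system would use a learned or
# curated one with thousands of entries.
POSITIVE = {"good", "great", "excellent", "helpful"}
NEGATIVE = {"bad", "poor", "slow", "disappointing"}

def orientation(sentence: str) -> int:
    """Score a sentence's sentiment. A contrastive connective
    ('but') shifts emphasis to the clause that follows it."""
    text = sentence.lower()
    # the clause after 'but' usually carries the author's stance
    if " but " in text:
        text = text.split(" but ", 1)[1]
    words = text.replace(",", " ").replace(".", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(orientation("The screen is great, but the battery is disappointing."))
```

Here the positive word before "but" is discounted, so the sentence scores negative, which matches how a human reader weighs the contrast.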
The method is to be combined with classical analysis methods (machine-learning-based or clustering-based) to achieve higher accuracy.

The purpose of this research is to establish a new scheme in knowledge representation: Natural Language Independent Knowledge Representation.
A concept can be implemented as a class in the Java programming language. The class hierarchy can be established through the inheritance relationship. Attributes in the class define the relations between concepts.
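The text proposes Java; the same idea can be sketched in Python, where class inheritance gives the concept hierarchy and attributes encode relations between concepts (the concrete concepts below are invented examples):

```python
class Concept:
    """Root of the concept hierarchy."""

class Feather(Concept):
    pass

class Animal(Concept):
    pass

class Bird(Animal):
    """Inheritance models the 'is-a' relation between concepts."""
    def __init__(self):
        # an attribute models a relation to another concept
        self.covered_by = Feather()

b = Bird()
print(isinstance(b, Animal), isinstance(b.covered_by, Concept))
```

Because the hierarchy and relations live in the type system rather than in words of any particular language, the representation itself is natural-language independent.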
The scheme can be applied to Natural Language Processing, Sentiment Analysis and Question-Answering Systems to serve as a tool for identifying the precise meaning of a word, and consequently to achieve Word Sense Disambiguation.
Surveys, or questionnaires, are a very common means of obtaining information in scientific and social investigations. Typically, the data are entered into a number of data files.
As the work progresses, to test some hypotheses or to perform some exploratory analysis, new data files often have to be prepared. This approach is very time-consuming and error-prone. In fact, this project serves two related purposes.

SBVR (Semantics of Business Vocabulary and Business Rules) is a comprehensive standard for defining the vocabulary and rules of application domains.
That is, the aim of SBVR is to capture and represent all the business concepts (the vocabulary) and all the business rules. The importance of business rules is that they drive the business activities and govern the way the business software system behaves. In other words, the concepts and rules captured by SBVR represent the business knowledge required to understand the business and to build software systems that support it.
The aim of the thesis is to study the SBVR standard in depth, to survey the works that have been published since the release of the standard, and to critically evaluate the applicability of SBVR to practical information system development. This is a very important task for building business-rule-driven information systems. Typically, the process for building such a system starts with building an SBVR model, which is then translated into a UML model, more suitable for practical implementation.
The approach proposed for this thesis consists of the following steps.

The aim of web services is to make data resources available over the Internet to application programs written in any language. There are two main approaches to web services; RESTful web services have now been recognized as generally the most useful method to provide data services for web and mobile application development. The aim of the thesis is to study the concept of RESTful web services in depth and to construct a catalogue of patterns for designing data-intensive web services.
The aim of the catalogue is to act as a guide for the practical design of web services for application development. The rationale behind this research is the need for a practical system that can be used by students to select subjects during their study.
While the advice of the course coordinator and the short description of the subject in the handbook are most frequently used by students to make up their minds, they could make more informed decisions by drawing on the experience of past students. In this thesis, the student will use Case-Based Reasoning (CBR) to design and develop a recommender system for subject selection in a higher education context.
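A minimal sketch of the CBR retrieve-and-reuse cycle over hypothetical student cases follows; the feature set (completed subjects and interests) and the cases themselves are invented for illustration:

```python
def similarity(case_a, case_b):
    """Jaccard similarity over a student's attributes (a
    hypothetical feature set: subjects taken, interests)."""
    a, b = set(case_a), set(case_b)
    return len(a & b) / len(a | b)

def recommend(new_student, past_cases):
    """Retrieve the most similar past case and reuse its outcome:
    the subject that student chose."""
    best = max(past_cases, key=lambda c: similarity(new_student, c["profile"]))
    return best["chosen_subject"]

past_cases = [
    {"profile": ["databases", "python"], "chosen_subject": "Data Mining"},
    {"profile": ["networks", "c"], "chosen_subject": "Distributed Systems"},
]
print(recommend(["python", "databases", "statistics"], past_cases))
```

The research questions sit exactly in the parts this sketch hard-codes: which features to include in a case, which similarity measure to use, and how to revise and retain new cases.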
The research component of this project is the identification and validation of the CBR approach and its parameters for the recommendation system.

In this study, the student will select one type of improper behaviour in online social networks (OSNs), such as cyber-bullying, cyber-stalking or hate campaigns. The outcome of this research is a strategy or a policy that can be considered by OSN providers.

Constructive alignment (CA) is a subject design concept used in the higher education sector. In this thesis, the student will review educational technology methods and tools that have been used in the higher education sector.
Data stream mining is today one of the most challenging research topics, because we have entered the data-rich era. This condition requires a computationally light learning algorithm which is scalable to process large data streams.
Furthermore, data streams are often dynamic and do not follow a specific and predictable data distribution. A flexible machine learning algorithm with a self-organizing property is desired to overcome this situation, because it can adapt itself to any variation of data streams. The evolving intelligent system (EIS) is a recent initiative of the computational intelligence society (CIS) for data stream mining tasks. It features an open structure, where it can start either from scratch with an empty rule base or from an initially trained rule base.
Its fuzzy rules are then automatically generated based on the contribution and novelty of the data stream. In this research project, you will work on extensions of existing EISs to enhance their online learning performance, thus improving their predictive accuracy and speeding up their training process.
A research direction to be pursued in this project is to address the issue of uncertainty in data streams. The era of big data refers to a scale of dataset which goes beyond the capabilities of existing database management tools to collect, store, manage and analyze. Although big data is often associated with the issue of volume, researchers in the field have found that it is characterized by other Vs as well: variety, velocity and veracity. Various data analytic tools have been proposed.
The so-called MapReduce from Google is among the most widely used approaches. Nevertheless, the vast majority of existing works are offline in nature, because they assume full access to the complete dataset and allow a machine learning algorithm to perform multiple passes over all data. In this project, you will develop an online parallelization technique to be integrated with the evolving intelligent system (EIS). Moreover, you will develop a data fusion technique, which will combine the results of EISs from different data partitions.
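The partition-then-fuse idea can be sketched as follows. The per-partition "models" here are just running means standing in for EIS rule bases, an assumption made purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_fit(partition):
    """'Map' step: each worker summarises its own data partition.
    Here the per-partition model is just a (count, mean) pair."""
    n = len(partition)
    return n, sum(partition) / n

def fuse(partials):
    """'Reduce'/fusion step: combine per-partition results into a
    global model, weighting each partition by its size."""
    total = sum(n for n, _ in partials)
    return sum(n * mean for n, mean in partials) / total

partitions = [[1.0, 2.0, 3.0], [10.0, 20.0], [4.0]]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(partial_fit, partitions))
print(fuse(partials))  # the exact global mean of all six values
```

Because the fusion step only needs small per-partition summaries, the partitions can be processed in parallel and in a single pass, which is the property an online EIS parallelization would need to preserve.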
Existing machine learning algorithms are mostly cognitive in nature: they consider only the issue of how to learn. One may agree that the learning process of human beings is always meta-cognitive in nature, because it involves two other issues: what to learn and when to learn. Recently, the notion of the metacognitive learning machine has been developed, exploiting the theory of meta-memory from psychology.
The concept of scaffolding theory, a prominent tutoring theory for a student learning a complex task, has been implemented in the metacognitive learning machine as a design principle of the how-to-learn part. This project will be devoted to enhancing our past work on the metacognitive scaffolding learning machine. It will study refinements of the learning modules to achieve better learning performance.

Undetected or premature tool failure may lead to costly scrap or rework arising from impaired surface finishing, loss of dimensional accuracy, or possible damage to the work-piece or machine.
This issue requires the advancement of conventional tool-condition monitoring systems (TCMSs) using online adaptive learning techniques to predict tool wear on the fly. The cutting-edge learning methodologies developed in this project will pioneer frontier tool-condition monitoring technologies in manufacturing industries.

Today, we confront an explosion of social media text data.
From these massive amounts of data, various data analytic tasks can be performed, such as sentiment analysis, recommendation, web news mining, etc. Because social media data constitute text data, they usually involve a high-dimensionality problem.
For example, two popular text classification problems, namely 20 Newsgroups and the Reuters top categories, have more than 15,000 input features. Furthermore, information on social media platforms is continuously growing and rapidly changing; this requires highly scalable and adaptive data mining tools that go well beyond what existing ones can do, such as evolving intelligent systems.
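One standard way to keep memory bounded under an ever-growing vocabulary is feature hashing (the "hashing trick"), which maps an unbounded token space into a fixed-size vector. A minimal sketch, not the specific technique of this project:

```python
def hashed_features(tokens, n_buckets=1024):
    """Feature hashing: map an unbounded token vocabulary into a
    fixed-size count vector, so memory stays constant no matter how
    many distinct words the stream produces. (Python's built-in str
    hash is randomized per process but stable within one run.)"""
    vec = [0] * n_buckets
    for tok in tokens:
        vec[hash(tok) % n_buckets] += 1
    return vec

v = hashed_features("the cat sat on the mat".split())
print(sum(v))  # all 6 tokens are counted somewhere in the vector
```

The trade-off is that unrelated tokens may collide in a bucket, so the bucket count is chosen to balance memory against collision noise.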
The research outcome will be useful in large-scale applications which go beyond the capabilities of existing data mining technologies. This project will not only cope with the exponential growth of data streams in social media, but will also develop a flexible machine learning solution which adapts to the time-varying nature of social media data.
Big data is too large, dynamic and complex to capture, analyse and integrate using currently available computing tools and techniques. Big data collection, integration and storage are the main challenges of this project, as the integration and storage of big data require special care. Consequently, it is necessary to prevent possible data loss between collection and processing, as big data always comes from a great variety of sources, including high-volume streaming data from dynamic environments.
As such, it opens new scientific research directions for the development of new underlying theories and software tools, including more advanced and specialized analytics. However, most of today's big data technologies fall short of these requirements.
In order to integrate big data from various sources with differing variety and velocity, and to build a central repository accordingly, it is increasingly important to develop a new scientific methodology, including new software tools and techniques. In particular, the main focus of this project is to capture, analyse and integrate big data from different sources, including dynamic streaming data and static data from databases.
Towards this end, government data can be used to analyse and develop applications and tools which can bring benefit to society.

In recent years, electronic health services have been increasingly used by patients, healthcare providers, healthcare professionals, etc.
Healthcare consumers and providers have been using a variety of such services via technologies such as desktop computers, mobile phones, smartphones, tablets, etc. For example, the eHealth service in Australia is used to store and transmit users' health information in one secure and trusted environment.
However, security is still a big challenge and a central research issue in the delivery of electronic health services. For example, emergency situations place special demands on access to health information. In addition to the security issue, privacy is also a concern that should not be compromised, especially when there is a need to ensure security.
The main aim of this project is to enable online right-time data analysis and statistical functions to generate the different reports that are required for collaborative decision making.
This collaborative DSS will be built on an underlying integrated data repository which captures the different data sources relevant to the different organisations in the collaborative environment. Within the DSS, some measurements relevant to individual organisations will also be maintained. The main focus of the collaborative decision support system is the availability of heterogeneous consolidated data at the right time and in the right place.
With the increasing popularity of large heterogeneous data repositories and corporate data warehousing, there is a need to increase the efficiency of the queries used for analysis. This case is even stronger in database environments that hold both spatial and temporal information. Spatio-temporal data includes all time slices pertinent to each object or entity.
For each particular area there will be spatial information (coordinates, shape, etc.) and the time slice during which a set of values for the above properties is valid. The main focus of this topic is to investigate ways to optimize queries that are used to analyse such spatio-temporal data.
There is a well-known one-liner by Donald Rumsfeld about known unknowns and unknown unknowns. One of the big problems faced by designers is dealing with such unknowns. So, what does this mean for system development and design? Can this be formalized? Do we do it already? Where does domain expertise come into this? Technology is changing; however, the small system you build today may still be in use 50 years from now. OK, you are the government of a country developing a new Social Welfare system.
You want it to survive for the next 50 years. What exactly does this mean? How would you do this? FACT 1: Very expensive systems survive for decades, even if (or especially when) they are mission critical.
Find some important examples. FACT 3: We know a lot about component-based design, software re-use and related issues. How do we bring all this together so that systems can deal with change?

Some search tasks involve searches across a very wide range of web pages from a wide range of sources. The searcher may download pages, extract information from pages, and, in the process, create a history of link activations.
The problem people face is what happens if the searcher has to stop and resume the process days later. The purpose of this project is to provide support for people using Google as a search engine.

Coding, compression and security. This line of research focuses on information theory in its broadest sense. It works on channel coding (linear and non-linear error-correcting codes), source coding (hyperspectral and medical image compression, with the JPEG and CCSDS standards), and cryptography and network and information security (security and privacy in distributed environments).
Artificial intelligence. The objective of this line of research is to delve deeper into fundamental aspects of symbolic artificial intelligence. Some research topics are: intelligent agents, automatic learning, logic for artificial intelligence, approximate reasoning, heuristic search, agreement technologies and electronic institutions.
High performance computing. The evolution of parallel and distributed computing systems has led to great advances in many areas of research and development. Right now, high performance computing is key to many fields of science and engineering. The main objective of this line is to conduct research into resource management policies, as well as new programming paradigms and performance-tuning tools aimed at increasing the efficiency of multidisciplinary applications (bioinformatics, health, fire prediction, etc.).
PhD in Computer Science: lines of research. In this section you will find all the lines of research on this PhD programme and its thesis supervisors. Need more information? Contact the programme manager by filling in this form:
You will also find, in the right-hand column, the staff who can act as academic tutors. Your thesis must have a single tutor, who may or may not be the same person as the thesis supervisor. You can find detailed information on the functions and responsibilities of supervisors and tutors by clicking on the menu options Thesis supervision and Thesis tutoring. You can consult in detail the possible thesis supervisors of the programme, their research lines and their contact emails here.