Products -> Semantic Analysis for Vacation Rental / Apartment Reviews
Semantic Analysis for Vacation Rental / Apartment Reviews consists of 142 dedicated Semantic Models. Each Semantic Model was specially designed, built, tested, and re-tested on hundreds of thousands of vacation rental, apartment, and other accommodation reviews from 10 different sources. All presented Semantic Models work with an unparalleled precision of 90-95%.
Below, you will find a high-level view of all semantic models designed for apartment reviews.
In other words, it shows what type of information we garner from reviews.
Client loyalty & Recommendations
Will the guest come back, or not?
Will they recommend it, or not?
Overall good/satisfied vs. disappointed?
Is it recommended for couples? For families with kids?
Apartment Opinions
How was the apartment in general? Clean? Dirty? Renovated? Outdated? Were stairs too steep?
Was everything as advertised?
Garden/Terrace available/not available?
Critical opinions / Alerts for the owners
Bed bugs? Theft? Felt secure?
Apartment was not as advertised?
Dirty apartment? Does the host respect privacy?
Rooms&Bedrooms Opinions
Rooms - Spacious? Clean? Did everything work fine?
Bed - Comfortable? Uncomfortable?
Living Room: Ok? Big? Tiny?
Were there adequate sleeping spaces?
Location Opinions
Opinion In General - Is it a good location?
Close to: City center? Public transportation? Restaurants? Shopping? Airport? Highway? Nature?
Neighborhood - Is it a good neighborhood? Is it safe?
Bathroom&Kitchen Opinions
Kitchen: Amenities available/not available?
Bathroom: Was it clean? Spacious or small? Has good amenities? Was it shared or private?
Was the shower alright? Was the water pressure good?
Accessibility Opinions
Handicap accessible or complaints?
Bathrooms/Stairs are handicap accessible?
Were the keys at the location? Is there keypad entry?
Does it have parking? Is it ok? Is it big? Is it easy to find a space?
Host Opinions
Was the host polite? Friendly? Well informed? Did they respond quickly? Did they respect privacy? Were they at the location? Do they have a pet? Or was there a lack of any interaction?
Price/Payment Opinions
Problem with payments?
Problem with refund?
Additional fees? Is a deposit necessary?
Was it expensive? Cheap? Was price adequate or overpriced?
Amenities Opinions
Detergents available/not available?
Laundry machine available/not available?
Washing machine available/not available?
Complimentary wine/fruits/snacks/food?
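To make the model catalog above concrete, here is a minimal sketch of the kind of structured record one review segment could yield and how records might be bucketed by category. All field names and values are illustrative assumptions, not the actual API schema:

```python
# Hypothetical structured record for one detected review segment.
# Field names ("category", "aspect", "feature", "polarity") are
# illustrative assumptions, not the real API response format.
segment = {
    "text": "The kitchen had everything we needed",
    "category": "Bathroom&Kitchen Opinions",
    "aspect": "kitchen",
    "feature": "amenities available",
    "polarity": 1,
}

def group_by_category(segments):
    """Bucket detected segments by their semantic category."""
    grouped = {}
    for s in segments:
        grouped.setdefault(s["category"], []).append(s)
    return grouped

grouped = group_by_category([segment])
```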
If you want to get a custom-made private API with different models, or receive more info about the custom On-Premise version with 0 cents/text (Big Data compatible)
First, you process all reviews of a given apartment and extract important information with high accuracy. Then, it is easy to compare these results with other apartments, and valuable patterns emerge.
We analyzed all 587 of your reviews, compared them with 250,000 reviews of similar businesses, and extracted 9 recommendations/tips on how you can improve your business:
Compare your parameters with similar businesses
Here, we focus on classified negative opinions that are critical to apartment owners, presented on a brand reputation dashboard.
By clicking on a specific semantic category (here: Noisy rooms), we get the specific sentences/segments (not the whole review) where people mentioned noisy rooms. We detect all the information correlated with noisy rooms (e.g. "cleaner is banging around"), not just keyword matches (e.g. "noisy", "loud"). This is the advantage of Human-like Analysis (Semantic Analysis) over keywords.
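As a toy illustration of why a keyword filter under-detects "noisy rooms": a paraphrase of the same complaint with no noise keyword slips through. The keyword set here is an assumption chosen only to show the gap a semantic model is built to close:

```python
# Keyword-only detection of "noisy rooms": a paraphrase of the same
# complaint that shares no keyword is missed entirely.
NOISE_KEYWORDS = {"noisy", "loud", "noise"}

def keyword_detects_noise(sentence):
    words = set(sentence.lower().replace(",", " ").split())
    return bool(words & NOISE_KEYWORDS)

sentences = [
    "The room was very noisy at night",      # caught by keywords
    "The cleaner is banging around at 7am",  # same complaint, no keyword
]
hits = [keyword_detects_noise(s) for s in sentences]
```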
Search based on your individual preferences, matched with the opinions of people similar to you and with data from reviews
What is important to you? Personalize your search
We remember these parameters while reading millions of reviews and find you the best possible place
Our AI deeply analyzed 3 million reviews from Berlin to match your personal needs:
Meet the Alchemist Traveler. He has been everywhere and has read all the reviews in the world (and he remembers them all).
He is patient, and he can spare a moment to plan your perfect vacation.
Example 1:
Example 2:
Example 3:
Instead of reading thousands of reviews individually, you can see a semantic summary and get a feeling for what they contain in 20 seconds. You can click on specific information to see the extracted sentences from reviews.
Apartment
apartment: renovated 23, clean 15
Rooms&Bedrooms
room: clean 34, spacious 22, well equipped 15, sth broken 4
Bathroom&Kitchen
bathroom: well equipped 23, shower problem 19, clean 15, outdated 13
Host
service: keys not at the location 15, check-in slow 12
Price
apartment: overpriced 23, adequate price 4
Location
close to: highway 7, city center 5, public transportation 4
Amenities
laundry machine: available 13
Other
deal breakers: none

The automatic summary made from reviews, after processing by Semantic Analysis for Apartment Reviews (no additional NLP/NLU/ML or post-processing needed).
The first, non-block title (Apartment, Price, etc.) is a Category from our API; blue blocks (bathroom, payment, etc.) are Aspects; and the green and red blocks (outdated, well equipped, etc.) are the Feature and Polarity (the Feature is the text, the Polarity is the color).
See the live demo of semantic summary made from hotel reviews (powered by our technology).
All displayed results make sense. By clicking on specific information, you can display actual sentences (not whole reviews).
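As a sketch of how a summary like the one above could be aggregated from per-segment output: count each (feature, polarity) pair per (category, aspect). The tuple layout here is an assumption for illustration, not the API's actual response format:

```python
from collections import Counter

# Aggregate per-segment output into dashboard counts, e.g.
# Rooms&Bedrooms / room: clean 2, spacious 1.
def summarize(segments):
    counts = {}
    for category, aspect, feature, polarity in segments:
        counts.setdefault((category, aspect), Counter())[(feature, polarity)] += 1
    return counts

segments = [
    ("Rooms&Bedrooms", "room", "clean", 1),
    ("Rooms&Bedrooms", "room", "clean", 1),
    ("Rooms&Bedrooms", "room", "spacious", 1),
]
summary = summarize(segments)
```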
We analyzed 590 Reviews and organized them for you
(so you do not have to read them all)
People were most satisfied with:
Complimentary snacks
Clean Bathroom
Terrace available
Well-Equipped Room
Detergents available
People complained about:
Keys not at the location
Unfriendly Host
Handicap not accessible
show me more...
Client loyalty analysis
Overall good/satisfied vs. Overall disappointed
Will come back vs. Will not come back
Will recommend vs. Will not recommend
Information extracted by 142 Semantic Models is precise enough to create automatic text summary.
Natural Language Generation Summary from Reviews could look like this:
Our AI deeply analyzed all your reviews for the past 3 months, combined them with reviews of similar businesses and created this summary:
Most people (88%) were satisfied with their stay; many (47 mentions) wrote that they would come back and would recommend this apartment (24 m.).
Reviewers state that the rooms in this apartment are clean (34 mentions) and spacious (22 m.), and that the kitchen has all the amenities (15 m.). They were also satisfied with the garden (17 m.) and the terrace (15 m.).
The majority found the host helpful and friendly (35 m.), but some stated that he was unprofessional (19 m.). People also complained about the slow check-in process (20 m.) and that the keys were not at the location (11 m.).
There were also some issues with the neighbors (23 m.) and loud noises at night (17 m.).
Generally, people say it is good value for money (42 m.). Some said it is expensive (9 m.), but some also said it is cheap (14 m.).
read more...
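A summary like the one above can be sketched as simple template-based Natural Language Generation over mention counts. The threshold and wording below are illustrative assumptions, not the real NLG rules:

```python
# Minimal template-based NLG: turn a (feature, mention-count) pair
# into a summary line, with a qualifier chosen by an assumed threshold.
def nlg_line(feature, mentions, threshold=15):
    qualifier = "many reviewers" if mentions >= threshold else "some reviewers"
    return f"{qualifier} mentioned {feature} ({mentions} mentions)"

line = nlg_line("clean rooms", 34)
```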
Semantic distribution summary out of text:
Overall good/satisfied
Would come back
Will Recommend
Clean Rooms
Spacious Rooms
Kitchen has amenities
Garden available
Terrace available
Friendly & Helpful Host
Unprofessional Staff
Slow Check-in
Keys not at the location
Issues with Neighbours
Noisy Apartment
Expensive Apartment / Cheap Apartment
If you would like to talk more about these solutions, or discover what is possible
Example apartment review:
Simplified output after processing by Semantic Analysis for Apartment Reviews:
1. "We got this apartment because of its cheap price and near to the centre"
close to: city center +1
apartment: inexpensive +1
2. "The apartment was a little bit small and the bed was not that comfortable"
apartment: small -1
bed: uncomfy -2
3. "There was a lot of lovely decorations and it was nice"
decor: nice +1
4. "There was very fast wifi, which was great"
wifi: works good +2
5. "However there was no parking spaces in front of the house so we spend some time sechrching for a place"
parking: lack of -1
6. "Also the neighborhood was full of drunken people, probably it was because the house was not that far from downtown and beach and there was a lot of homeless people"
neighborhood: not safe -2
close to: city center +1
close to: beach +1
7. "Will not come back"
opinion: will not return -2
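Output like the above is easy to consume programmatically. Here is a sketch of filtering strongly negative segments (potential deal breakers) out of such a response; the JSON shape is an assumption for illustration, not the documented response format:

```python
import json

# Assumed per-segment JSON output, mirroring the example above.
response_text = json.dumps([
    {"segment": "Will not come back",
     "aspect": "opinion", "feature": "will not return", "polarity": -2},
    {"segment": "There was very fast wifi, which was great",
     "aspect": "wifi", "feature": "works good", "polarity": 2},
])

def deal_breakers(response, threshold=-2):
    """Flag strongly negative segments (polarity <= threshold)."""
    return [r["feature"] for r in json.loads(response)
            if r["polarity"] <= threshold]

flagged = deal_breakers(response_text)
```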
Use the SaaS version - pay per text (hosted via rapidAPI & Amazon AWS on a highly-scalable architecture)
Or see
We specialize in creating dedicated Language Understanding APIs for specific reviews or other user-generated content. We focus on travel, food, apps, surveys, and profanity & toxicity detection, and have prepared a set of Language Understanding APIs for these domains. We also develop private, custom-made Language Understanding APIs for any kind of text (reviews, comments, or other user-generated content). Contact Us to discuss what is possible, or Send Us Your Dataset to see how it works on your data.
We stand by the statement that we detect more information than Watson or any other Deep Learning solution, and that we are more accurate than Google in a specific domain. We are also much more cost-effective, and we allow you to process as much data as you want at no additional cost (0 cents/text).
This is possible because we created a next-generation process for developing NLU systems, in which we can build Semantic Models 10x faster. It is designed from the ground up to process reviews and other user-generated content, not proper English texts. It is the result of 13 years of experience in NLP/NLU, 8 years in processing reviews, and more than 50 implementations (2007-2020) of NLP/NLU systems and Language Understanding APIs in companies, corporations, and startups to process reviews and other user-generated content.
We are passionate and proud of our technology and aim to provide the optimal technology for reviews and other user-generated content. We will be happy to help you implement your brilliant ideas and discover what is possible.
"Breakfast was tasty" | => | Breakfast: positive |
"Breakfast was huge" | => | Breakfast: positive |
"Breakfast was not included" | => | Breakfast: negative |
"Breakfast was very tasty but limited choices" | => | Breakfast: neutral |
"Breakfast was delicious but you had to pay extra" | => | Breakfast: neutral |
"Breakfast was tasty" | => | Breakfast: tasty |
"Breakfast was huge" | => | Breakfast: plenty options |
"Breakfast was not included" | => | Breakfast: not included |
"Breakfast was very tasty but limited choices" | => | Breakfast: tasty | poor, limited |
"Breakfast was delicious but you had to pay extra" | => | Breakfast: tasty | not included |
Language has colors. Do not reduce it to black & white.
Technological differences of the Unicorn NLP Cognitive Learning solution (compared to other NLP/NLU/ML/Deep Learning solutions):
Other differences:
A semantic model is the newest approach to information extraction, in which the boundaries of the model are determined not by the structure of a language (its grammar, or the words used), but by the type of information you are detecting. It does not matter how a user phrases an opinion about a specific object, e.g. “breakfast was a joke”, “Muffin and cold coffee is not a breakfast”, “Breakfast wasn’t included as written”: our semantic model still detects it and provides structured data. With the “breakfast” example, it is relatively easy to provide a shallow analysis (e.g. Sentiment Analysis) with current NLP techniques. If you are looking for more sophisticated information (e.g. Is it safe? Is it handicap-friendly? Will someone come back? Is it sustainable? Is it pet-friendly?), the situation gets more complicated. Current techniques provide only part of this information, and their accuracy relies on the keywords used in reviews. Users are very creative in their reviews, and they do not write with keywords in mind. Let’s take one example. To answer the question “Is it safe?”, you need to capture a lot of information from reviews that contain no common keywords such as safe/dangerous. In practice, people write this in tens or hundreds of ways, e.g. “there was a lot of drunken people outside”, “I did not feel good because of the neighbors”, “someone yelled in the night and the front desk lady did nothing”. We believe that if a human can understand what is written in a review, then technology should detect it as well. That is why we call them Semantic Models, not information extraction models.
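The idea can be sketched in miniature: one semantic signal ("not safe") mapped from several phrasings that share no common keyword. The patterns below are toy illustrations, not the real semantic rules (which run to 50-2000 lines of semantic code per model):

```python
import re

# Toy "not safe" semantic signal: many surface forms, no shared keyword.
NOT_SAFE_PATTERNS = [
    re.compile(r"drunken people"),
    re.compile(r"did not feel (good|safe)"),
    re.compile(r"yelled in the night"),
]

def detect_not_safe(sentence):
    s = sentence.lower()
    return any(p.search(s) for p in NOT_SAFE_PATTERNS)

flags = [detect_not_safe(s) for s in [
    "there was a lot of drunken people outside",
    "someone yelled in the night and the front desk lady did nothing",
    "the garden was lovely",
]]
```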
Our Unicorn NLP Cognitive Learning Environment is the place where we design and precisely craft all the semantic models. It is a perfect combination of human and computer intelligence, the result of 13 years of experience in the NLP/NLU field and 50+ implementations in 2007-2020. It is the secret sauce of our technology, and it allows us to deliver the best technology in the world for reviews and other user-generated content. Statistical algorithms provide the data, based on processing hundreds of thousands of reviews, but humans make all the crucial decisions. Humans do not only annotate the data (as in the machine learning approach); they also create semantic boundaries and semantic rules, build domain-specific dictionaries, describe common domain-specific misspellings and grammar errors, and do whatever it takes to achieve a precision of 90-95%. Given the pace of AI development, this is a significant advantage in information extraction (in NLP/NLU), and it will remain appropriate for the next decade. Think of our environment as a way of creating tens of thousands of very precise, hand-crafted semantic rules, but with an AI assistant rigorously validating new ideas and marking the parts of the algorithms and semantic code that need improvement. The AI assistant provides new ideas based on constant simulations on real data and tens of statistical tools.
We designed a unique semantic programming language (QL4Reviews) to extract information from reviews and other user-generated content. With this programming language, you build semantic models ten times faster than with other current technologies. Every Semantic Model is programmed in QL4Reviews and consists of 50-2000 lines of semantic code. A heavily optimized engine processes 1 million reviews a day through 150+ Semantic Models on a single medium-CPU Amazon AWS instance. The speed of the core engine was essential to us from the beginning: it speeds up daily development and lets us run several iterations and validations on real data every day.
Our technology is designed from the ground up to process reviews and other user-generated content, not proper-grammar texts. Its resistance to errors also allows processing machine-translated reviews/texts (via Google Translate or Microsoft Translator). This approach provides almost as good results as processing native English reviews, but there is no need to rewrite and maintain semantic models in other languages. Consequently, it is a much more cost-effective solution, and it is easier to maintain and scale (using the Google Translation API allows you to process 105 languages from day one).
Read more & See demo: Processing machine-translated non-English Travel Reviews
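The translate-then-analyze pipeline described above can be sketched as two composed steps. Both functions below are stubs: `translate` stands in for a real service such as the Google Translation API, and `analyze_english` stands in for the English semantic models:

```python
# Pipeline sketch: machine-translate first, then run the English models.
# Both functions are stand-in stubs for illustration only.
def translate(text, target="en"):
    stub = {"Die Wohnung war sehr sauber": "The apartment was very clean"}
    return stub.get(text, text)

def analyze_english(text):
    if "clean" in text.lower():
        return [("apartment", "clean", 1)]
    return []

def analyze_any_language(text):
    return analyze_english(translate(text))

result = analyze_any_language("Die Wohnung war sehr sauber")
```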
This is not just another limited black box that you still need to configure for your data and learn how to use. You get a dedicated Language Understanding API with human-like accuracy, trained & tested on your data. We redesigned every layer of the technology to address the biggest challenge in the NLP/NLU industry: we wanted to change the way you use and think about a Natural Language Understanding API. In our opinion, if you are using an NLP API, you should not have to get your hands dirty and spend inordinate time thinking about how best to implement it into your application. We wanted to hide all the NLP complexity inside the system and provide you with ready-to-use data. We believe that an NLP API should be simple and easy to use without linguistic, machine learning, or other expertise. You should not need to spend weeks improving the accuracy of a "domain-independent" NLP API, adjusting it to your domain, or making decisions about complex output parameters. Output parameters like confidence (e.g. 0.7852), polarity (e.g. 0.4352), or detected keywords (e.g. friendly, 0.6786; disappointed, -0.7564) give every developer problems and, frankly, they are neither standard nor predictable. They are rather a consequence of the technology used inside, and putting these parameters in the output is, in our opinion, a way of leaving configuration and responsibility for decision-making to you. We believe in simple, human-like output, where a human can understand what the API detected. We believe we have developed excellent semantic models for specific domains, so you can get tailored results and focus on your business. We position ourselves as a ready-to-use technology, not a standard API tool that you need to configure and train on your data.
Use the SaaS version - pay per text (hosted via rapidAPI & Amazon AWS on a highly-scalable architecture)
If you want to get a custom-made private API, or receive more info about the custom On-Premise version (with 0 cents/text),
or want to discuss what is possible
If you want to test it or see how it works on your data, send us a dataset
(we do not collect your data)