Apache Hive is a data warehouse infrastructure tool that sits on top of Hadoop to summarize and query big data. Hadoop itself is a piece of software that allows you to run large data sets on a cluster of computers, and Hive exposes that data through a SQL-like interface. HiveServer2 supports a command shell, Beeline, that works with HiveServer2; we can run both batch and interactive commands through this CLI service, which we will cover in the following sections. Choose a password for your Beeline CLI account.

Start by downloading the most recent stable release of Hive from one of the Apache download mirrors (see Hive Releases). Unpacking the tarball will result in the creation of a subdirectory named hive-x.y.z (where x.y.z is the release number). Set the environment variable HIVE_HOME to point to the installation directory, and finally add $HIVE_HOME/bin to your PATH (these steps are sketched below). If you want the most recent Hive code instead, clone the Hive Git repository with git clone https://git-wip-us.apache.org/repos/asf/hive.git (the master branch); any branches with other names are feature branches for works in progress.

Hive by default gets its configuration from the conf directory of the installation; the location of the Hive configuration directory can be changed by setting the HIVE_CONF_DIR environment variable, and configuration variables can be changed by (re-)defining them in hive-site.xml. In other words, Hive configuration can be manipulated by editing hive-site.xml and defining any desired variables (including Hadoop variables) in it, or by overriding values on the command line. Metadata is kept in an embedded Derby database whose disk storage location is determined by the Hive configuration variable named javax.jdo.option.ConnectionURL. Error logs are very useful for debugging problems. Starting with release 0.7, Hive also supports a mode to run map-reduce jobs in local mode automatically; this decision is logged at the INFO level of log4j, so you need to make sure that logging at the INFO level is enabled (see the notes on logging below).

A few query language notes. REPLACE COLUMNS can also be used to drop columns from the table's schema. The pattern matching used by statements such as SHOW TABLES follows Java regular expressions. Partition columns can also be specified in the projection clauses. NO verification of data against the schema is performed by the LOAD command. Hive can also delete and update records using ACID transactions. With data loaded, we can do some complex data analysis on the table u_data; note that if you're using Hive 0.5.0 or earlier you will need to use COUNT(1) in place of COUNT(*).
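As a quick sketch of those installation steps (the release number x.y.z is a placeholder, not a specific version, and the paths assume you unpack the tarball in the current directory):

$ tar -xzvf apache-hive-x.y.z-bin.tar.gz         # unpack the release tarball
$ export HIVE_HOME=$(pwd)/apache-hive-x.y.z-bin  # point HIVE_HOME at the installation directory
$ export PATH=$HIVE_HOME/bin:$PATH               # put the hive and beeline scripts on PATH
$ hive --hiveconf hive.root.logger=INFO,console  # override a configuration variable for one session

The --hiveconf form is handy for one-off overrides; anything you want to keep permanently belongs in hive-site.xml as described above.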
The exact files you need to edit depend on your infrastructure. After changing the configuration, restart HiveServer2 and try to run the beeline command again. You can connect to a different database directly from the command line, for example hive db=/usr/share/cloudxlab/hives/mydb. After creating a database, verify that it exists by running the show databases command. To start the DataNode on a new node, the DataNode daemon should be started manually using the $HADOOP_HOME/bin/hadoop-daemon.sh script. When setting up a JDBC client connection, give the connection alias a name in the 'Name' input box. The LOAD DATA statement in the sketch below loads a file that contains two columns separated by ctrl-a into the pokes table.

In the metastore troubleshooting example used later in this article, the metastore database runs on MySQL Server 5.6 and the MySQL instance holds four databases: information_schema, hive, mysql, and test. I made the following change and the Hive metastore and Hive now work: SET PASSWORD FOR 'hive'@'sandbox.hortonworks.com' = PASSWORD('password');
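Here is a minimal sketch of the pokes example (the table and column names follow the Hive Getting Started tutorial; kv1.txt is the sample file shipped with the Hive distribution, so adjust the path if you use your own ctrl-a delimited file):

$ hive -e "CREATE TABLE IF NOT EXISTS pokes (foo INT, bar STRING);"
$ hive -e "LOAD DATA LOCAL INPATH '$HIVE_HOME/examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;"
$ hive -e "SHOW TABLES; SELECT COUNT(*) FROM pokes;"

Because LOAD performs no schema verification, a file with the wrong delimiter will load without error and simply produce NULL columns when queried.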
The default logging level is WARN for Hive releases prior to 0.13.0. When initializing the metastore schema we can use "derby" as the db type, for example. For the Java regular expression syntax used by the pattern matching mentioned above, check the documentation at http://java.sun.com/javase/6/docs/api/java/util/regex/Pattern.html. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.

Why is a Hive table loading with NULL values? Since the LOAD command does not verify data against the schema, mismatched delimiters or column types only show up as NULLs at query time. It is also good to check whether the metastore is using the correct MySQL instance (the local MySQL, or one configured as a remote MySQL).

Beeline is a JDBC client based on the SQLLine CLI written by Marc Prud'hommeaux, and HiveServer2 (HS2) lets remote clients execute queries against the Hive server. Use the commands in the sketch below to start Beeline and connect to a running HiveServer2 process; to list the databases in the Hive warehouse, enter the command show databases. You can also start the Hive server HS2 (HiveServer2) using the hive --service command. Client-side variables can be set on the Beeline command line, for example $ bin/beeline --hiveconf x1=y1 --hiveconf x2=y2 sets client-side variables x1 and x2 to y1 and y2 respectively. If you get a connection error such as "User: ... is not allowed to impersonate ...", the Hadoop proxy-user (impersonation) settings for the user running HiveServer2 need to be adjusted. See HiveServer2 Logging for the server-side logging configuration.

In branch-1, Hive supports both Hadoop 1.x and 2.x. The reason we want to start the hive-metastore service explicitly is that port 9083 is not listening on our server. In the Getting Started examples, a further CREATE TABLE statement creates a table called invites with two columns and a partition column called ds.
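A rough connection sketch (the host, port, and scott/tiger credentials are the defaults mentioned in this article; substitute your own HiveServer2 host and user):

$ $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n scott -p tiger
0: jdbc:hive2://localhost:10000> show databases;

Or non-interactively, with client-side variables passed through --hiveconf:

$ $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 --hiveconf x1=y1 --hiveconf x2=y2 -e "show databases;"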
Follow these steps to start the different components of Hive on a node:

Run the Hive CLI: $HIVE_HOME/bin/hive (this starts the Hive terminal, which can be used to issue HiveQL commands).
Run HiveServer2 and Beeline: $HIVE_HOME/bin/hiveserver2 and then $HIVE_HOME/bin/beeline -u jdbc:hive2://$HiveServer2_HOST:$HiveServer2_PORT
Run HCatalog and start up the HCatalog server (Hive release 0.11.0 and later): $HIVE_HOME/hcatalog/sbin/hcat_server.sh
Run the HCatalog CLI (Hive release 0.11.0 and later): $HIVE_HOME/hcatalog/bin/hcat
For more information, see HCatalog Installation from Tarball and HCatalog CLI in the HCatalog manual.

HiveServer2, a.k.a. HS2, is a second-generation Hive server. It tries to communicate with the metastore as part of its initialization bootstrap, so before you proceed to start HiveServer2, make sure you have created the Hive metastore and data warehouse location and are able to run the Hive CLI. HiveServer2 by default provides user scott and password tiger, so let's use these default credentials. The Hive JDBC host should be specified for Hive Beeline. When MySQL backs the metastore, the hive user also needs privileges on the metastore database, granted for example with a statement of the form GRANT ALL ON <metastore database>.* TO 'hive'@'localhost' IDENTIFIED BY 'changeme';.

Go to the command line of the Hive server and start hiveserver2. With the Docker image used in this example:

docker exec -it hive-server bash
hiveserver2

Maybe a little check that something is listening on port 10000 now:

netstat -anp | grep 10000
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 446/java

Okay. Alternatively, Beeline and HiveServer2 can run in the same process for testing purposes, giving a user experience similar to the old Hive CLI (see the sketch below). One community answer suggests running HiveServer2 as the Hadoop user straight from the Hive installation: sudo su hduser, cd into the bin directory of your Hive installation (for example /usr/local/apache-hive-x.y.z-bin/bin), then hive --service hiveserver2.

Starting with Hive 1.1.0, EXPLAIN EXTENDED output for queries can be logged at the INFO level by setting the hive.log.explain.output property to true. The memory limit for local-mode child JVMs is set to zero by default, in which case Hive lets Hadoop determine the default memory limits of the child JVM. The Hive compiler transforms queries in the backend into executable jobs. To create a new database from the Hive shell, start it with sudo hive and enter CREATE DATABASE <database name>;.
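As a sketch of the same-process option (assuming a local, non-secure setup; the empty JDBC URL is what tells Beeline to embed the server):

# Embedded mode: Beeline starts HiveServer2 inside its own JVM, useful for quick testing only.
$HIVE_HOME/bin/beeline -u jdbc:hive2://

# Normal client/server mode for comparison: start the server, then connect over the network.
$HIVE_HOME/bin/hiveserver2 &
$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n scott -p tiger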
HDFS is the primary storage component of the Hadoop ecosystem: it stores large structured or unstructured data sets across the nodes of the cluster and maintains the metadata in the form of log files. To use the HDFS commands, first you need to start the Hadoop services, either with sbin/start-all.sh or with the individual scripts below. Follow these steps to launch Hive:

Step 1: Start all your Hadoop daemons:
start-dfs.sh   # starts the NameNode, DataNode and secondary NameNode
start-yarn.sh  # starts the NodeManager and ResourceManager
jps            # check the running daemons
Step 2: Launch Hive with the hive command. Hive one-shot commands (the -e option/mode) let you run a statement and exit, as in the hive -e examples shown earlier. On a brand-new installation, formatting the NameNode is the very first step before any of this.

The Apache Hive archive we downloaded earlier is named apache-hive-x.y.z-bin.tar.gz. Example queries are available in build/dist/examples/queries, and more are available in the Hive sources at ql/src/test/queries/positive. To store metastore data with the embedded Derby database, create a directory named data in the $DERBY_HOME directory. When MySQL is used instead, add the MySQL JDBC driver to the Hive library path (/usr/lib/hive/lib). A Java 11 note: the default truststore format has changed to PKCS12 and the truststore password is required; otherwise, the connection fails. If you are using HiveServer2 on a cluster that has Kerberos security enabled, see the HiveServer2 Security documentation.

You can start a Hive shell as the hive user; the shell uses Beeline in the background, so you can enter statements on the command line of a node in the cluster. Check whether the HiveServer2 service is running and listening on port 10000 using the netstat command.

Troubleshooting the metastore: I get the following error message if I try to start the hive-metastore service manually; the error occurs at the last of the four steps of starting the Hive services:

Caused by: java.sql.SQLException: Access denied for user 'hive'@'sandbox.hortonworks.com' (using password: YES)

The stack frames reported with it include com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073), com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597), com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1300), the com.mysql.jdbc.ConnectionImpl and com.mysql.jdbc.JDBC4Connection constructors invoked via sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method), and com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:254). Connect to MySQL and execute the next command to change the hive user's password (the original password is encrypted and unknown) to "password"; MySQL may warn that using a password on the command line interface can be insecure:

SET PASSWORD FOR 'hive'@'sandbox.hortonworks.com' = PASSWORD('password');

Then add the following to hive-site.xml so the metastore uses the same password:

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
</property>
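Putting the fix together, a rough recovery sequence might look like this (the hostname sandbox.hortonworks.com and the password 'password' mirror the example above and must be replaced with your own values):

# 1. Reset the metastore user's password in MySQL 5.6:
mysql -u root -p -e "SET PASSWORD FOR 'hive'@'sandbox.hortonworks.com' = PASSWORD('password');"

# 2. Make sure javax.jdo.option.ConnectionPassword in hive-site.xml matches that password.

# 3. Start the metastore service and confirm it is listening on its default port 9083:
hive --service metastore &
netstat -an | grep 9083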
Hive also stores query logs on a per-session basis in /tmp/<user.name>/ by default, but the location can be configured in hive-site.xml with the hive.querylog.location property. HiveServer2 is an improved version of HiveServer that supports Kerberos authentication and multi-client concurrency. The Hive distribution comes with hiveserver2, located in the $HIVE_HOME/bin/ directory; run this command without any arguments to start HiveServer2. This will start hiveserver2 on port 10000 and output the logs to the console. Equivalently, start the Hive services using the command hive --service hiveserver2 and connect to them with a command-line client such as beeline. The Hive compiler generates map-reduce jobs for most queries.
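A minimal start-and-verify sequence tying these pieces together (the log path assumes the default /tmp/<user.name> location discussed above):

# Start HiveServer2; with no arguments it listens on port 10000.
$HIVE_HOME/bin/hiveserver2 &

# Watch the console output, or the session logs written under /tmp/$USER by default.
tail -f /tmp/$USER/hive.log

# From another terminal, connect with Beeline and run a sanity query.
$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -e "show databases;"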