Welcome to AssignmentCache!

Oracle

Need Help in Oracle Assignment?
We can help you if you are having difficulty with your Oracle Assignment. Just email your Oracle Assignment to admin@assignmentcache.com.
We provide Oracle assignment help to students all over the world.

Items 1 to 10 of 36 total

  1. DBM449 LAB1 SqlFile

    DBM 449 LAB 1 Oracle Joins

    $20.00

    GENERAL OVERVIEW
    Scenario/Summary
    My colleague, Ann Henry, operates a regional training center for a commercial software organization. She created a database to track client progress so she can analyze the effectiveness of the certification program. CLIENT, COURSE, and COURSE_ACTIVITY are three of the tables in her database. The CLIENT table contains client name, company, client number, pre-test score, certification program, and email address. The COURSE_ACTIVITY table contains client number, course code, grade, and instructor notes. The COURSE table contains the course code, course name, instructor, course date, and location. Although she and her instructors enter much of the data themselves, some of the data are extracted from the corporate database and loaded into her tables.

    Loading the initial data was easy. For grade entry at the end of each course, a former employee created a data entry form for the instructors. Updating most client information and generating statistics on client progress are not easy because Ann does not know much SQL. For now, she exports the three tables into three spreadsheets. To look up a grade in the COURSE_ACTIVITY spreadsheet, she first has to look up the client number in the CLIENT spreadsheet. While this is doable, it is certainly not practical. For statistics, she sorts the data in the COURSE_ACTIVITY spreadsheet using multiple methods to get the numbers she needs.

    Every month, Ann's database tables need to be refreshed to reflect changes in the corporate database. Ann describes this unpleasant task: she manually compares the contents of newly extracted data from corporate to the data in her spreadsheets, copies in the new values, and then replaces the database contents with the new values.

    Ann needs our help. Let’s analyze her situation and determine what advanced SQL she could use to make her tasks easier.
         
    L A B O V E R V I E W

    Scenario/Summary

    The purpose of this lab is to explore join operators to determine which, if any, are appropriate for solving Ann's business problems, as described in this week's lecture.

    Since Ann prefers to work from Excel spreadsheets, she wants her CLIENT and COURSE_ACTIVITY tables exported into one spreadsheet rather than the two she is currently using. We need to determine which, if any, of the join operators will provide the data she wants for the single spreadsheet. (Note: we will not perform the export, just determine how to retrieve the necessary data.) Using the spreadsheet, she will be able to determine:

    1. Which course(s) a specific client has taken
    2. What grade(s) a specific client has earned in a specific course
    3. Which clients did not take any courses
    4. Which courses were not taken by any client

    Here are results from DESCRIBE commands that show the structure (columns and their data types) of the CLIENT and COURSE_ACTIVITY tables. You may refer to them while constructing your queries.

    SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 14 22:38:52 2007

    Copyright (c) 1982, 2005, Oracle.  All rights reserved.

    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP and Data Mining Options

    SQL> desc course_activity

    (The DESCRIBE output for the CLIENT and COURSE_ACTIVITY tables was not captured in this listing.)

    For this lab you will be creating several documents. First, write your queries in Notepad to create a script file that will contain all of the queries asked for in lab steps 4 through 13. You can (and should) test each query as you write it to make sure that it works and is returning the correct data. Once you have all of your queries written, create a SPOOL session and run your entire script file. Be sure that you execute a SET ECHO ON session command before running the file so that both the query and the output will be captured in the SPOOL file. IMPORTANT: If you are using Windows Vista you will need to create a directory on your C: drive to SPOOL your file into, because Vista will not allow you to write a file directly to the root of the C: drive. This will give you two files for the lab. The third file will be the Lab1 Report document found in Doc Sharing, where you will put your responses to the questions asked in the various lab steps.
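
    As a rough sketch only (the directory and file names here are placeholders, not the names required by the lab), a SPOOL session might look like this:

    -- example names only; substitute your own script and spool file names
    SET ECHO ON
    SPOOL C:\DBM449\lab1_results.txt
    @C:\DBM449\lab1_queries.sql
    SPOOL OFF
    SET ECHO OFF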

    Now let's begin.

    L A B S T E P S

    STEP 1: Start Oracle SQL*Plus via Citrix

    Log into the Citrix iLab environment. Open your Oracle folder, select SQL Plus and log in to your database instance. Use "sys" as User Name, and "oracle" as the Password. Enter the Host String as "DB9999.world as sysdba" where 9999 is the database number you have been assigned.

    STEP 2: Initialize tables

    Download the pupbld.sql and Lab1_init.sql files associated with the links to your C: drive or to the F: drive in your Citrix environment. You will need to open each of the files and edit the connection string to reflect your instance name. The pupbld.sql file has two connection strings: one at the top of the script and another at the bottom. Be sure to change both of these to reflect your instance name.

    Once you have done this then run the pupbld.sql script first (DO NOT copy and paste it) in your SQL*Plus session. The script will create the product_user_profile synonym in the SYSTEM account which will be used each time you log in as a normal user.

    Next run the lab1_init.sql script in your session. The script will create a new user (DBM449_USER) that will be used in various labs in this course. You can find the password for this new user by looking at the CREATE USER statement in the script file. Disregard the DROP TABLE error messages. They occur because the script is designed to work regardless of whether you have already created the tables or not. This way, you may run it if you ever decide to reset the contents of your tables to the original values. When you run the script for the first time, the error messages appear as you attempt to drop tables that do not exist.

    Once the script has finished you will be logged into the new user and ready to start your lab.

    STEP 3: Verify your tables

    You want to verify that everything completed successfully. To do this, execute a SELECT * FROM TAB statement to make sure all 5 tables were created, and then execute a SELECT COUNT(*) FROM statement against each of the table names (a sketch of these queries follows the list below). You should find the following numbers of records for each table.

    • CLIENT table - 5 rows
    • COURSE table - 5 rows
    • COURSE_ACTIVITY table - 6 rows
    • CORP_EXTRACT1 table - 3 rows
    • CORP_EXTRACT2 table - 0 rows
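
    A minimal sketch of the verification queries described above:

    -- list the tables in the current schema
    SELECT * FROM TAB;

    -- row count for one table; repeat for COURSE, COURSE_ACTIVITY, CORP_EXTRACT1 and CORP_EXTRACT2
    SELECT COUNT(*) FROM client;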

    NOTE: In the following steps, when writing your queries be sure to list the tables in the FROM clause in the same order they are listed in the instructions. Reversing the order of the tables in the FROM clause will produce an incorrect result set.

    STEP 4: Using the FULL OUTER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a FULL OUTER JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the FULL OUTER JOIN be helpful to Ann? Place your response in the lab report document for this step.
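
    For illustration only, a FULL OUTER JOIN along the lines Step 4 asks for might look like the sketch below; the column names (CLIENT_NUMBER, CLIENT_NAME, COURSE_CODE, GRADE) are assumptions, so check them against your own DESCRIBE output.

    -- column names are assumed; verify with DESC CLIENT and DESC COURSE_ACTIVITY
    SELECT c.client_number, c.client_name, ca.course_code, ca.grade
    FROM client c
    FULL OUTER JOIN course_activity ca
      ON c.client_number = ca.client_number;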

    STEP 5: Using the RIGHT OUTER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a RIGHT OUTER JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the RIGHT OUTER JOIN be helpful to Ann? Place your response in the lab report document for this step.

    STEP 6: Using the LEFT OUTER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a LEFT OUTER JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the LEFT OUTER JOIN be helpful to Ann? Place your response in the lab report document for this step.

    STEP 7: Using the NATURAL JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a NATURAL JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.
    • Will the NATURAL JOIN be helpful to Ann? Place your response in the lab report document for this step.

    STEP 8: Using the INNER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using an INNER JOIN.
    Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the INNER JOIN be helpful to Ann? Place your response in the lab report document for this step.

    Based on the five steps above, write a conclusion stating which join, if any, Ann should use to populate the spreadsheet that can answer her questions.

    STEP 9: Using the UNION operator 

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the UNION operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.
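
    As a hedged illustration of the first bullet (the CORP_EXTRACT1 column name is an assumption):

    -- client numbers that appear in either table, with duplicates removed by UNION
    SELECT client_number FROM client
    UNION
    SELECT client_number FROM corp_extract1;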

    STEP 10: Using the UNION ALL operator

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the UNION ALL operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.

    STEP 11: Using the INTERSECT operator

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the INTERSECT operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.

    STEP 12: Using the MINUS operator

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the MINUS operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.
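
    A sketch of how MINUS might be applied to the first bullet (column names assumed as before):

    -- client numbers present in CLIENT but absent from CORP_EXTRACT1
    SELECT client_number FROM client
    MINUS
    SELECT client_number FROM corp_extract1;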

    STEP 13: Using subqueries

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using a subquery with NOT IN operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.
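
    A sketch of the NOT IN form for the first bullet, again with assumed column names:

    -- clients whose client numbers do not appear in CORP_EXTRACT1
    SELECT client_number, client_name
    FROM client
    WHERE client_number NOT IN (SELECT client_number FROM corp_extract1);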

    Deliverables
        
    What is Due

    Submit your spooled lab file with the queries and results sets along with the completed Lab 1 Report to the Dropbox. Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.

    Learn More
  2. DBM 449 Lab 2 Sql File

    DBM 449 lab 2 OEM Query optimization

    $20.00

    In this lab we will focus on several common performance tuning issues that one might encounter while working with a database.  You will need to refer to both your text book and the lecture material for this week for examples and direction to complete this lab.
    To record your work for this lab use the LAB2_Report.doc found in Doc Sharing. As in your first lab you will need to copy/paste your SQL statements and results from SQL*Plus into this document. This will be the main document you submit to the Drop Box for Week 2.

    L A B   S T E P S   
    STEP 1: Examine Query Optimization using OEM

    Oracle Enterprise Manager (OEM) provides a graphical tool for query optimization.  The tables that you will be using in this lab are the same ones that were created in the first lab in the DBM449_USER schema.

    1. Start OEM via Citrix iLab. If you need help or instructions on how to do this you can refer to the How_to_use_OEM_in_Citrix iLab.pdf file associated with this link.
    2. Select the Database Tools icon from the vertical tool bar and select the SQL Scratchpad icon from the expanded tool bar. If you need help or instructions on how to do this you can refer to the Executing_and_Analyzing_Queries_in_OEM.pdf file associated with this link.
    3. Write a SQL statement to query all data from table COURSE (you will need to connect as the DBM449_USER). Click on Execute. Take a screen shot that shows the results and paste that into the lab document.
    4. Click on Explain Plan. Take a screen shot of the results and paste that into the lab document.
    5. Write a comment on how this query is executed.
    6. Write a SQL statement to query the course_name, client_name and grade from the COURSE, COURSE_ACTIVITY and CLIENT tables and order the results by course name, and within the same course by client name.
    7. Click on Explain Plan. Take a screen shot of the results and paste that into the lab document.
    8. Exit out of OEM at this point.
    9. Write a comment on how this query is executed.

    STEP 2: Examine Query Optimization using SQL*Plus

    In this portion of the lab we are going to use SQL*Plus to replicate what we did in Step one using OEM.  At the end of this part of the lab you will be asked to compare the results between the processes.

    1. Before you can analyze an SQL statement in SQL*Plus you first need to create a Plan Table that will hold the results of your analysis.  To do this you will need to download the UTLXPLAN.SQL file associated with this link and run this script in an SQL*Plus session while logged in as the DBM449_USER user.  Once the script has completed then execute a DESC command on the PLAN_TABLE.
    2. Again you are going to write a SQL statement to query all data from table COURSE.  Remember to make the modifications to the query so that it will utilize the plan table that you just created.
    3. Now write the query that will create a result table similar to the one below by using the DBMS_XPLAN package (a sketch of this sequence appears at the end of this step).

    PLAN_TABLE_OUTPUT
    Plan hash value: 1263998123

    ---------------------------------------------------------------------------
    | Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------
    |   0 | SELECT STATEMENT  |        |     5 |   345 |     3   (0)| 00:00:01 |
    |   1 | TABLE ACCESS FULL | COURSE |     5 |   345 |     3   (0)| 00:00:01 |
    ---------------------------------------------------------------------------

    Note
    -----
       - dynamic sampling used for this statement

    1. Now execute the second query you used in Step 1 and then show the results in the plan table for that query.  HINT: Before you run your second query you will need to delete the contents of the plan table so that you will get a clean analysis.
    2. Write a short paragraph comparing the output from OEM to the output from the EXPLAIN PLAN process you just ran.  Be sure to copy/paste all of the queries and result sets from this step into the lab report section for this step.
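
    A minimal sketch of parts 1 through 3 of this step for the simple COURSE query; the same pattern applies to the three-table query, and the PLAN_TABLE should be cleared between analyses:

    -- clear out any earlier analysis rows
    DELETE FROM plan_table;

    -- store the execution plan without running the query
    EXPLAIN PLAN FOR
    SELECT * FROM course;

    -- display the stored plan with DBMS_XPLAN
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);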

    STEP 3: Dealing With Chained Rows

    In this portion of the lab we are going to create a new table and then manipulate some data to generate a series of chained rows within the table.  After you have generated this problem, we are going to go through the process of correcting it and tuning the table so that the chained rows are gone.  The process is a little tricky and is going to require you to think through your approach to some of the SQL.  Remember that every table has a pseudocolumn named ROWID that is maintained implicitly by the system; this column can be queried just like any other column.  You will need this information in step 6 of this part of the lab.

    1. For this part of the lab you will need to create a new user named GEORGE.  You can determine your own password, but you want to make sure that the default tablespace is USERS and the temporary tablespace is TEMP.  Grant both the CONNECT and RESOURCE roles to the new user and then log in to create a session for the new user GEORGE.
    2. Once logged in as the new user, write the SQL to create a new table using the given column information and storage parameters listed below (a sketch follows the parameter list).  NOTE:  the parameters have been chosen intentionally so please do not change them.

    Table name: NEWTAB
    Columns: Prod_id       NUMBER
         Prod_desc VARCHAR2(30)
         List_price NUMBER(10,2)
         Date_last_upd DATE

    Tablespace:    USERS
         PCTFREE    10
         PCTUSED    90
         Initial and Next extents:    1K
         MinExtents    1
         MaxExtents    121
         PCTINCREASE    0
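
    Taken together, the column and storage parameters above translate into roughly the following CREATE TABLE statement (a sketch, not the required answer):

    CREATE TABLE newtab (
      prod_id        NUMBER,
      prod_desc      VARCHAR2(30),
      list_price     NUMBER(10,2),
      date_last_upd  DATE
    )
    TABLESPACE users
    PCTFREE 10
    PCTUSED 90
    STORAGE (INITIAL 1K NEXT 1K MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 0);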

    1. Next, you will need to download both the UTLCHAIN.SQL and LAB2_FILL_NEWTAB.SQL scripts from the links shown.  First run the UTLCHAIN script in your SQL*Plus session and then run the LAB2_FILL_NEWTAB script.  Be sure that you run them in the order just described.
    2. Now execute the ANALYZE command on the table NEWTAB to list any chained rows into the CHAINED_ROWS table.  HINT: refer back to the lecture material for this week and your textbook.
    3. Write and execute the query that will list the owner_name, table_name and head_rowid columns from the CHAINED_ROWS table.  You will have approximately 200+ rows in your result set so please do not copy/paste all of them into the lab report.  You only need the first 10 or 15 rows as a representation of what was returned.
    4. Now go through the following steps to get rid of all the chained rows (a sketch of the whole sequence appears after this list).
      • You can create your temporary table to hold the chained rows of the NEWTAB table as a select statement based on the existing table.  HINT: CREATE TABLE NEWTAB_TEMP AS SELECT * FROM NEWTAB....  You want to be sure that you only pull data from the existing table that matches the data in the CHAINED_ROWS table.  To assure this you will need a WHERE clause that pulls only those records in NEWTAB whose ROWID value matches a HEAD_ROWID value in the CHAINED_ROWS table.
      • Now you need to delete the chained rows from the NEWTAB table.  To accomplish this you will need a subquery that pulls the HEAD_ROWID value from the CHAINED_ROWS table to match against the ROWID value in the NEWTAB table.  The number of rows deleted should be the same as the number that you retrieved in the query for part 5 of this section.
      • Now write an insert statement that will insert all of the rows of data in the temporary table that you created above into the NEWTAB table.  Be sure that you explicitly define the rows that you are pulling data from in the NEWTAB_TEMP table.
      • Next, write and execute the statement that will TRUNCATE the chained_rows table.
      • Now run the same ANALYZE statement you did in step 4 and then the query you did in part 5 above.  This time you should get a return message stating no rows selected.
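
    A minimal sketch of the sequence just described, assuming NEWTAB has the columns created earlier; adapt the column list and predicates to your own objects:

    -- 1. copy the chained rows into a temporary table
    CREATE TABLE newtab_temp AS
      SELECT * FROM newtab
      WHERE rowid IN (SELECT head_rowid FROM chained_rows WHERE table_name = 'NEWTAB');

    -- 2. delete the chained rows from the original table
    DELETE FROM newtab
    WHERE rowid IN (SELECT head_rowid FROM chained_rows WHERE table_name = 'NEWTAB');

    -- 3. re-insert the saved rows so they are stored unchained
    INSERT INTO newtab (prod_id, prod_desc, list_price, date_last_upd)
      SELECT prod_id, prod_desc, list_price, date_last_upd FROM newtab_temp;

    -- 4. clear the analysis table, re-analyze, and re-run the check query
    TRUNCATE TABLE chained_rows;
    ANALYZE TABLE newtab LIST CHAINED ROWS;
    SELECT owner_name, table_name, head_rowid FROM chained_rows;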

    Be sure that you copy/paste all of the above SQL code and returned results sets and messages into the appropriate place in the Lab Report for this week.

    Deliverables     

    What is Due
    Submit your completed Lab 2 Report to the Dropbox as stated below.  Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.
     

    Learn More
  3. DBM 449 Lab 3 Sql File

    DBM 449 Lab 3 Distributed Database

    $20.00

    L A B O V E R V I E W    

    Scenario/Summary
    To the end user, working with databases distributed throughout a company's network is no different from working with multiple tables within a single database. The fact that the different databases exist in other locations should be totally transparent to the user. For this lab we are going to take on the role of a database administrator in a company that has three regional offices in the country. You work in the central regional office, but there is also a West Coast Region located in Seattle and an East Coast Region located in Miami. Your role is to gather report information from the other two regions.

    For this lab you are going to work with three different databases. You already have your own database instance. You will also be working with a database named SEATTLE representing the West Coast Region and a database named MIAMI representing the East Coast Region. Login information for these two additional database instances is as follows:

    SEATTLE: Userid - seattle_user
    Password - seattle
    Host String - seattle

    MIAMI: Userid - miami_user
    Password - miami
    Host String - miami

    To record your work for this lab use the LAB3_Report.doc found in Doc Sharing. As in your previous labs you will need to copy/paste your SQL statements and results from SQL*Plus into this document. This will be the main document you submit to the Drop Box for Week 3.

    L A B S T E P S    
    STEP 1: Setting up Your Environment

    1. Be sure you are connected to the DBM449_USER schema that was created in lab 1. 
    2. To begin this lab you will need to download the LAB3_DEPTS.SQL script file associated with the link and run the script in your DBM449_USER schema of your database instance. This script creates a single table that you will be using to help pull data from each of the other two database instances.  Notice that the DEPTNO column in this table is the PRIMARY KEY column and can be used to reference or link to the DEPTNO column in the other two databases' employee tables.
    3. Now you need to create a couple of private database links that will allow you to connect to your other two regional databases. To accomplish this use the connection information listed above in the Lab Overview section. Name your links using your database instance name together with the region name as the name for the link. Separate the two with an underscore (example - DB1000_SEATTLE).
    4. After creating both of your database links, query the USER_DB_LINKS view in the data dictionary to retrieve information about your database links.  The output from your query should look similar to what you see below.  You will need to set your linesize to 132 and format the DB_LINK and HOST columns to be only 25 bytes wide to get the same format that you see.

    DB_LINK                   USERNAME                       HOST                      CREATED
    ------------------------- ------------------------------ ------------------------- ---------
    DB1000_MIAMI              MIAMI_USER                     miami                     09-DEC-08
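
    A sketch of steps 3 and 4, assuming an instance named DB1000 (substitute your own instance name in the link names):

    -- private database links to the two regional databases
    CREATE DATABASE LINK db1000_seattle
      CONNECT TO seattle_user IDENTIFIED BY seattle
      USING 'seattle';

    CREATE DATABASE LINK db1000_miami
      CONNECT TO miami_user IDENTIFIED BY miami
      USING 'miami';

    -- formatting and query used to reproduce output like the sample above
    SET LINESIZE 132
    COLUMN db_link FORMAT A25
    COLUMN host FORMAT A25
    SELECT db_link, username, host, created FROM user_db_links;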

    STEP 2: Testing your Database Links
    Each of your remote databases has an employee data table. The tables are named SEATTLE_EMP and MIAMI_EMP respective to the database they are in. Using the appropriate database link, query each of the two tables to retrieve the employee number, name, job function, and salary. (HINT: you can issue a DESC command on each of the distributed tables to find out the actual column names just like you would for a table in your own instance.)

    STEP 3: Connecting Data in the Seattle Database
    Write a query that will retrieve all employees from the Seattle region who are salespeople working in the marketing department. Show the employee number, name, job function, salary, and department name (HINT: The department name is in the DEPT table) in the result set.

    STEP 4: Connecting Data in the Miami Database
    Write a query that will retrieve all employees from the Miami region who work in the accounting department. Show the employee number, name, job function, salary, and department name (HINT: The department name is in the DEPT table) in the results set.

    STEP 5: Connecting Data in all Three Databases
    Now we need to expand our report. Write a query that will retrieve employees from both the Seattle and Miami regions who work in sales. Show the employee number, employee name, job function, salary, and location name in the result set (HINT: The location name is in the DEPT table).

    STEP 6: Improving Data Retrieval from all Three Databases
    Writing queries like the ones above can be fairly cumbersome. It would be much better to be able to pull this type of data as though it was coming from a single table, and in fact this can be done by creating a view.

    1. Using the query written above as a guide, write and execute the SQL statement that will create a view showing all employees in both the Seattle and Miami regions (you can use your own naming convention for the view name). Show the employee number, name, job, salary, commission, department number, and location name for each employee (HINT: The location name is in the DEPT table). A rough sketch follows part 2 below.
    2. Now write a query that will retrieve all the data from the view just created.
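
    A rough sketch of such a view, assuming the link names from the earlier example and classic EMP-style column names (EMPNO, ENAME, JOB, SAL, COMM, DEPTNO); verify the real column names with DESC before using it:

    -- view name and remote column names are assumptions
    CREATE OR REPLACE VIEW all_region_emp AS
      SELECT e.empno, e.ename, e.job, e.sal, e.comm, e.deptno, d.loc
      FROM seattle_emp@db1000_seattle e JOIN dept d ON d.deptno = e.deptno
      UNION ALL
      SELECT e.empno, e.ename, e.job, e.sal, e.comm, e.deptno, d.loc
      FROM miami_emp@db1000_miami e JOIN dept d ON d.deptno = e.deptno;

    -- part 2: pull everything back through the view
    SELECT * FROM all_region_emp;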

    Deliverables
    Submit your completed Lab 3 Report to the Dropbox. Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.

    Learn More
  4. MIS 562 Week 5 Homework Query Optimization

    MIS 562 Week 5 Homework Query Optimization

    $20.00

    MIS 562 Week 5 Homework Query Optimization

    Using the student schema from week 2, provide answers to the following questions.

    Question
    SQL statement or Answer
    1. Generate statistics for the student, enrollment, grade, and zipcode tables (15 pts)

    2. Write a query that performs a join, a subquery, a correlated subquery using the student, enrollment, grade, and zipcode tables. Execute each query to show that it produces the same results. (15 pts)

    3. Produce an autotrace output and explain plan for each query. (10 pts)

    4. Analyze the results and state which performs best and why. Write an analysis of what operations are being performed for each query. Determine which query is the most efficient and explain why (10 pts)
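
    Hedged sketches for questions 1 and 3 (table names are taken from the week 2 student schema; the queries for question 2 are yours to write):

    -- question 1: gather optimizer statistics; repeat for ENROLLMENT, GRADE and ZIPCODE
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'STUDENT')

    -- question 3: produce autotrace output in SQL*Plus for each of your queries
    SET AUTOTRACE ON EXPLAIN STATISTICS
    -- run your join, subquery, and correlated subquery here
    SET AUTOTRACE OFF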

    Learn More
  5. CMIS420 PROJECT 2 Mail-Order Database DML and DDL statements

    CMIS420 Advanced Relational Database PROJECT 2 Mail-Order Database

    $25.00

    CMIS420 Advanced Relational Database PROJECT 2 Mail-Order Database

    Overview:
    Use SQL, PL/SQL, and Triggers to design and create a Mail-Order Database System. Please create your own data for testing purpose. Use the attached file "Project 2 Tables" as a guide to creating the tables. You should pre-populate the PARTS, CUSTOMERS, EMPLOYEE and ZIPCODES tables.

    Due Date:
    Check the due date in Syllabus for the exact date for this assignment. No project will be accepted after the due date.

    Deliverables:
    Turn in all SQL scripts in the form of a SQL script files. The script files should include,

    1. A script file containing all the DML and DDL statements. That is, the SQL used to create the tables and sequence and the SQL to pre-populate or insert records in the tables. Name this file XXX_PROJ2.sql, where XXX are your initials.
    2. A file containing the PL/SQL package (Specification and Body) that provides the functionality listed in the requirements below. Name this file XXX_PROJ2.pkg, where XXX are your initials.
    3. A file containing the database triggers. Name this file XXX_PROJ2.trg, where XXX are your initials.
    4. Finally, provide a test SQL*PLUS routine (PL/SQL anonymous block) that will test the PL/SQL functionality developed. Name this file XXX_PROJ2_tst.sql, where XXX are your initials.

    You should submit your assignment through WebTycho as you did for previous assignments.
    Use winzip or any zip software to package the four (4) files into one file called XXX_project2.zip, where XXX are your initials.

    Requirements:
    The Mail-Order Database consists of the following tables and attributes. Please ensure that all constraints are created when creating the tables. All constraints other than NOT NULL constraints must be named.
    1. EMPLOYEE(ENO, ENAME, ZIP, HDATE, CREATION_DATE, CREATED_BY, LAST_UPDATE_DATE, LAST_UPDATED_BY)
    2. PARTS(PNO, PNAME, QOH, PRICE, REORDER_LEVEL, CREATION_DATE, CREATED_BY, LAST_UPDATE_DATE, LAST_UPDATED_BY)
    3. CUSTOMERS(CNO, CNAME, STREET, ZIP, PHONE, CREATION_DATE, CREATED_BY, LAST_UPDATE_DATE, LAST_UPDATED_BY)
    4. ORDERS(ONO, CNO, ENO, RECEIVED, SHIPPED, CREATION_DATE, CREATED_BY, LAST_UPDATE_DATE, LAST_UPDATED_BY)
    5. ODETAILS(ONO, PNO, QTY, CREATION_DATE, CREATED_BY, LAST_UPDATE_DATE, LAST_UPDATED_BY)
    6. ZIPCODES(ZIP, CITY)
    7. ORDERS_ERRORS(TRANSACTION_DATE, ONO, MESSAGE)
    8. ODETAILS_ERRORS(TRANSACTION_DATE, ONO, PNO, MESSAGE)
    9. RESTOCK(TRANSACTION_DATE, PNO)

    • The EMPLOYEE table contains information about the employees of the company. The ENO (Employee Number) attribute is the primary key. The ZIP attribute is a foreign key referring to the ZIPCODES table.
    • The PARTS table keeps a record of the inventory of the company. The record for each part includes its number (PNO) and name (PNAME) as well as the quantity on hand (QOH), the unit price (PRICE) and the reorder level (REORDER_LEVEL). PNO is the primary key for this table.
    • The CUSTOMERS table contains information about the customers of the mail-order company. Each customer is assigned a customer number (CNO), which serves as the primary key. The ZIP attribute is a foreign key referring to the ZIPCODES table.
    • The ORDERS table contains information about the orders placed by customers, the employee who took the orders, and the dates the orders were received and shipped. Order number (ONO) is the primary key. The Customer number (CNO) attribute is a foreign key referring to the CUSTOMERS table, and the ENO attribute is a foreign key referring to the EMPLOYEES table.
    • The ODETAILS table contains information about the various parts ordered by the customer within a particular order. The combination of the ONO and PNO attributes forms the primary key. The ONO attribute is a foreign key referring to the ORDERS table, and the PNO attribute is a foreign key referring to the PARTS relation.
    • The ZIPCODES table maintains information about the zip codes for various cities. ZIP is the primary key.
    • The ORDERS_ERRORS table contains information about any errors that occurred when an order is processed. Transaction date is the date of the transaction.
    • The ODETAILS_ERRORS table contains information about all errors that occur when processing an order detail. Transaction date is the date of the transaction.
    • The RESTOCK table contains information about all parts (PNO) that are below the reorder level. Transaction date is the date of the transaction.

    1. Write a package called Process_Orders to process customer orders. This package should contain four procedures and a function, namely:
    Add_order. This procedure takes as input customer number, employee number, and received date and tries to insert a new row in the Orders table. If the received date is null, the current date is used. The shipped date is left as null. If any errors occur, an entry is made in the Orders_errors table. A sequence called Order_number_seq should be used to populate the order number (ONO) column.
    Add_order_details. This procedure receives as input an order number, part number, and quantity and attempts to add a row to the Odetails table. If the quantity on hand for the part is less than what is ordered, an error message is sent to the Odetails_errors table. Otherwise, the part is sold by subtracting the quantity ordered from the quantity on hand for this part. A check is also made for the reorder level. If the updated quantity for the part is below the reorder level, an entry is made to the Restock table.
    Ship_order. This procedure takes as input an order number and a shipped date and tries to update the shipped value for the order. If the shipped date is null, the current date is used. If any errors occur, an entry is made in the Orders_errors table.
    Delete_order. This procedure takes as input an order number and tries to delete records from both the Orders and Odetails tables that match this order number. If any errors occur or there is no record that matches this order number, an entry is made in the Orders_errors table.
    Total_emp_sales. This function takes as input an employee number. It computes and returns the total sales for that employee.

    2. Create triggers on the PARTS, ORDERS, and ODETAILS tables to populate the CREATION_DATE, CREATED_BY, LAST_UPDATE_DATE, LAST_UPDATED_BY columns when an insert or update is made. Use SYSDATE and the pseudo column USER to populate these columns.

    3. Write a trigger that fires when a row in the Orders table is updated or deleted. The trigger should record the dropped order records in another table called deleted_orders. The deleted_orders table should also contain a date attribute that keeps track of the date and time the action (update or delete) was performed. This date is quite different from the CREATED_DATE and UPDATED_DATE from the Order table. Do not copy these dates to the deleted_order table. Please include the table creation script for the deleted_orders table in the script file.

    4. Create a sequence called order_number_seq that will be used to populate the order number (ONO) column.

    5. Write a PL/SQL anonymous block to test the above.
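
    For orientation only, minimal sketches of requirements 4 and 2; the trigger name ORDERS_AUDIT_TRG is an assumption, and the same pattern would be repeated for PARTS and ODETAILS:

    -- requirement 4: sequence used by Add_order to populate ONO
    CREATE SEQUENCE order_number_seq START WITH 1 INCREMENT BY 1;

    -- requirement 2: audit-column trigger for one table (ORDERS); trigger name is assumed
    CREATE OR REPLACE TRIGGER orders_audit_trg
      BEFORE INSERT OR UPDATE ON orders
      FOR EACH ROW
    BEGIN
      IF INSERTING THEN
        :NEW.creation_date := SYSDATE;
        :NEW.created_by    := USER;
      END IF;
      :NEW.last_update_date := SYSDATE;
      :NEW.last_updated_by  := USER;
    END;
    /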

    Learn More
  6. Assignment 9-8 Maintaining an Audit Trail of Product Table Changes

    Oracle 11g PL/SQL Joan Casteel Chapter 9 Hands-On Assignments Part 9-5 to 9-8

    Regular Price: $30.00

    Special Price $25.00

    Oracle 11g PL/SQL Joan Casteel Chapter 9 Hands-On Assignments Part 9-5 to 9-8

    Assignment 9-5: Processing Discount
    Brewbean's is offering a new discount for return shoppers. Every fifth completed order gets a 10% discount. The count of orders for a shopper is placed in a packaged variable named pv_disc_num during the ordering process. The count needs to be tested at checkout to determine whether a discount should be applied. Create a trigger named BB_DISCOUNT_TRG so that when an order is confirmed (the ORDERPLACED value is changed from 0 to 1), the pv_disc_num packaged variable is checked. If it's equal to 5, set a second variable named pv_disc_txt to Y. This variable is used in calculating the order summary so that a discount is applied, if necessary.
    Create a package specification named DISC_PKG containing the necessary packaged variables. Use an anonymous block to initialize the packaged variables to use for testing the trigger. Test the trigger with the following UPDATE statement:
    UPDATE bb_basket
    SET orderplaced = 1
    WHERE idBasket = 13;
    If you need to test the trigger multiple times, simply reset the ORDERPLACED column to 0 for basket 13 and then run the UPDATE again. Also, disable this trigger when you're finished so that it doesn't affect other assignments.


    Assignment 9-6: Use Triggers to Maintain Referential Integrity
    At times, Brewbean's has changed the ID numbers for existing products. In the past, developers had to add a new product row with the new id to the BB_PRODUCT table, modify all the corresponding BB_BASKETITEM and BB_PRODUCTOPTION table rows, and then delete the original product row. Can a trigger be developed to avoid all these steps and handle the update of the BB_BASKETITEM and BB_PRODUCTOPTION table rows automatically for a given change in product ID? If so, create the trigger and test by issuing an update statement, which changes the IDPRODUCT of 7 to 22. Do a rollback to return the data back to its original state. Also, disable the new trigger after you have completed the assignment.


    Assignment 9-7: Updating Summary Data Tables
    The Brewbean's owner uses several summary sales data tables every day to monitor business activity. The BB_SALES_SUM table holds the product ID, total sales in dollars, and total quantity sold for each product. A trigger is needed so that every time an order is confirmed or the ORDERPLACED column is updated to 1, the BB_SALES_SUM table is updated accordingly. Create a trigger named BB_SALESUM_TRG that performs this task. Before testing, reset the ORDERPLACED column to 0 for basket 3, as shown in the following code, and use this basket to test the trigger.
    UPDATE bb_basket
    SET orderplaced = 0
    WHERE idBasket = 3;
    Notice that the BB_SALES_SUM table already contains some data. Test the trigger with the following UPDATE statement, and confirm that the trigger is working correctly:
    UPDATE bb_basket
    SET orderplaced = 1
    WHERE idBasket = 3;
    Do a rollback and disable the trigger when you're finished so it doesn't affect the other assignments.


    Assignment 9-8: Maintaining an Audit Trail of Product Table Changes
    The accuracy of product table data is critical, and the Brewbean's owner wants an audit table containing information on all DML activity on the BB_PRODUCT table. This information should indicate the ID of the user performing a DML action, the date, the original values of the changed row, and the new values. This audit table needs to track specific columns of concern, including PRODUCTNAME, PRICE, SALESTART, SALEEND, and SALEPRICE. Create a table named BB_PRODCHG_AUDIT that can hold the relevant data. Then create a trigger named BB_AUDIT_TRG that fires on changes to the specified columns of the BB_PRODUCT table and records each change in the audit table.
    Be sure to issue the following command. If you created the SALES_DATE_TRG trigger in the chapter, it conflicts with this assignment.
    ALTER TRIGGER SALES_DATE_TRG DISABLE;
    Use the following update statement to test your trigger:
    UPDATE bb_product
    SET salestart = '05-MAY-03', saleend = '12-MAY-03', saleprice = 9
    WHERE idproduct = 10;
    When you have finished, do a rollback and disable the trigger so that it does not affect other assignments.

    Learn More
  7. Assignment 9-4 Updating Stock Levels When an Order Is Cancelled

    Oracle 11g PL/SQL Joan Casteel Chapter 9 Hands-On Assignments Part 9-1 to 9-4

    Regular Price: $25.00

    Special Price $20.00

    Oracle 11g PL/SQL Joan Casteel Chapter 9 Hands-On Assignments Part 9-1 to 9-4

    Assignment 9-1: Creating a Trigger to Handle Product Restocking
    Brewbean's has a couple of columns in the Product table to assist in inventory tracking. The REORDER column contains the stock level at which the product should be reordered. If the stock falls to this level, Brewbean's wants the application to insert a row in the BB_PRODUCT_REQUEST table automatically to alert the ordering clerk that additional inventory is needed. Brewbean's currently uses the reorder level amount as the quantity that should be ordered. This task can be handled by using a trigger.
    1. Take out some scrap paper and a pencil. Think about the tasks the trigger needs to perform, including checking whether the new stock level falls below the reorder point. If so, check whether the product is already on order by viewing the product request table; if not, enter a new product request. Try to write the trigger code on paper. Even though you learn a lot by reviewing code, you improve your skills faster when you create the code on your own.
    2. Open the c9reorder.txt file in the Chapter09 folder. Review this trigger code, and determine how it compares with your code.
    3. In SQL Developer, create the trigger with the provided code.
    4. Test the trigger with product ID 4. First, run the query shown in Figure 9-36 to verify the current stock data for this product. Notice that a sale of one more item should initiate a reorder.
    5. Run the UPDATE statement shown in Figure 9-37. It should cause the trigger to fire. Notice the query to check whether the trigger fired and whether a product stock request was inserted in the BB_PRODUCT_REQUEST table.
    6. Issue a ROLLBACK statement to undo these DML actions to restore data to its original state for use in later assignments.
    7. Run the following statement to disable this trigger so that it doesn't affect other projects:
    ALTER TRIGGER bb_reorder_trg DISABLE;


    Assignment 9-2: Updating Stock Information When a Product Is Filled
    Brewbean's has a BB_PRODUCT_REQUEST table where requests to refill stock levels are inserted automatically via a trigger. After the stock level falls below the reorder level, this trigger fires and enters a request in the table. This procedure works great; however, when store clerks record that the product request has been filled by updating the table's DTRECD and COST columns, they want the stock level in the product table to be updated. Create a trigger named BB_REQFILL_TRG to handle this task, using the following steps as a guideline:
    1. In SQL Developer, run the following INSERT statement to create a product request you can use in this assignment:
    INSERT INTO bb_product_request (idRequest, idProduct, dtRequest, qty)
    VALUES (3, 5, SYSDATE, 45);
    COMMIT;
    2. Create the trigger (BB_REQFILL_TRG) so that it fires when a received date is entered in the BB_PRODUCT_REQUEST table. This trigger needs to modify the STOCK column in the BB_PRODUCT table to reflect the increased inventory.
    3. Now test the trigger. First, query the stock and reorder data for product 5, as shown in Figure 9-38.
    4. Now update the product request to record it as fulfilled by using the UPDATE statement shown in Figure 9-39.
    5. Issue queries to verify that the trigger fired and the stock level of product 5 has been modified correctly. Then issue the ROLLBACK statement to undo modifications.
    6. If you aren't doing assignment 9-3, disable the trigger so that it doesn't affect other assignments.


    Assignment 9-3: Updating the Stock Level If a Product Fulfillment is Cancelled
    The Brewbean's developers have made progress on the inventory-handling processes; however, they hit a snag when a store clerk incorrectly recorded a product request as fulfilled. When the product request was updated to record a DTRECD value, the product stock level was updated automatically via an existing trigger, BB_REQFILL_TRG. If the clerk empties the DTRECD column to indicate that the product request has not actually been filled, the product stock level needs to be corrected or reduced, too. Modify the BB_REQFILL_TRG trigger to solve this problem.
    1. Modify the trigger code from Assignment 9-2 as needed. Add code to check whether the DTRECD column already has a date in it and is now being set to NULL.
    2. Issue the following DML actions to create or update rows that you can use to test the trigger:
    INSERT INTO bb_product_request (idRequest, idProduct, dtRequest, qty, dtRecd, cost)
    VALUES (4, 5, SYSDATE, 45, '15-JUN-2012', 225);
    UPDATE bb_product
    SET stock = 86
    WHERE idProduct = 5;
    COMMIT;
    3. Run the following UPDATE statement to test the trigger, and issue queries to verify that the data has been modified correctly.
    UPDATE bb_product_request
    SET dtRecd = NULL
    WHERE idRequest = 4;
    4. Be sure to run the following statement to disable this trigger so that it doesn't affect other assignments:
    ALTER TRIGGER bb_reqfill_trg DISABLE;


    Assignment 9-4: Updating Stock Levels When an Order Is Cancelled
    At times, customers make mistakes in submitting orders and call to cancel an order. Brewbean's wants to create a trigger that automatically updates the stock level of all products associated with a cancelled order and updates the ORDERPLACED column of the BB_BASKET table to zero, reflecting that the order wasn't completed. Create a trigger named BB_ORDCANCEL_TRG to perform this task, taking into account the following points:
    The trigger needs to fire when a new status record is added to the BB_BASKETSTATUS table and when the IDSTAGE column is set to 4, which indicates an order has been cancelled.
    Each basket can contain multiple items in the BB_BASKETITEM table, so a CURSOR FOR loop might be a suitable mechanism for updating each item's stock level.
    Keep in mind that coffee can be ordered in half or whole pounds.
    Use basket 6, which contains two items, for testing.
    1. Run this INSERT statement to test the trigger:
    INSERT INTO bb_basketstatus (idStatus, idBasket, idStage, dtStage)
    VALUES (bb_status_seq.NEXTVAL, 6, 4, SYSDATE);
    2. Issue the queries to confirm that the trigger has modified the basket's order status and product stock level correctly.
    3. Be sure to run the following statement to disable this trigger so that it doesn't affect other assignments:
    ALTER TRIGGER BB_ORDCANCEL_TRG DISABLE;
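
    A heavily hedged sketch of the approach described above; the BB_BASKETITEM and BB_PRODUCT column names and the quantity handling are assumptions, and the half/whole-pound logic is omitted:

    -- sketch only: column names are assumed and half/whole-pound handling is not shown
    CREATE OR REPLACE TRIGGER bb_ordcancel_trg
      AFTER INSERT ON bb_basketstatus
      FOR EACH ROW
      WHEN (NEW.idstage = 4)
    BEGIN
      FOR item IN (SELECT idproduct, quantity
                   FROM bb_basketitem
                   WHERE idbasket = :NEW.idbasket) LOOP
        UPDATE bb_product
           SET stock = stock + item.quantity
         WHERE idproduct = item.idproduct;
      END LOOP;

      UPDATE bb_basket
         SET orderplaced = 0
       WHERE idbasket = :NEW.idbasket;
    END;
    /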

    Learn More
  8. CMIS420 Project 3 Star Schema for OVS Database Spool File

    CMIS420 Project 3 Star Schema for OVS Database

    $20.00

    CMIS420 Project 3 Star Schema for OVS Database

    1) Complete Project #2 by ensuring that the FINANCING_PLANS, DEALERSHIPS, VEHICLES, and TIME (or TIMES if you desire) dimension tables and the SALES_FACTS fact table are created and populated.
    2) Create a user-defined function called VEHICLES_BY_VEHICLE_TYPE that receives an input parameter of a concatenated make and model and then queries the VEHICLES and SALES_FACTS tables to return the total vehicles sold by that combination. Execute your function for a sample input value of your choosing to demonstrate that it works correctly.
    3) Create a user-defined function called DOLLARS_BY_VEHICLE_TYPE that receives an input parameter of a concatenated make and model and then queries the VEHICLES and SALES_FACTS tables to return the total gross sales amount of the sales by that combination. Execute your function for a sample input value of your choosing to demonstrate that it works correctly.
    4) Create a stored procedure called STATS_BY_VEHICLE_TYPE that receives an input parameter of the concatenated make and model and then calls your two user-defined functions VEHICLES_BY_VEHICLE_TYPE and DOLLARS_BY_VEHICLE_TYPE. Execute your stored procedure for the same sample input values used earlier to demonstrate that it works correctly.
    5) Develop an SQL query to determine which holiday had the most sales and then drill down via another query to determine for that holiday which dealership, by zip code, had the most sales. Then drill down by another query to determine for that holiday which dealership, by zip code, and make had the most sales. Describe any correlation to the results of the query in Step #9 of Project #1 to see if dealership location matches customer location.
    6) Develop an SQL query to determine how many days, by day type, didn’t have more than 5 vehicles sold and $100,000 in total sales. In order to achieve a fair comparison, analyze your results by the total number of weekdays, weekend days, and holidays in your TIMES table.

    All SQL and PL/SQL should be executed via one or more SQL script files. The best approach is to have separate SQL script files for each step and then an SQL script file that calls all of them in sequence. Submit SQL*Plus SPOOL files produced by your SQL and PL/SQL showing all your SQL and PL/SQL code and the results, or if using iSQL*Plus or other GUI (e.g. SQL Developer), a single Word or PDF file of screen snapshots showing both your SQL and PL/SQL and the results. Do NOT submit your SQL script file.
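
    As one possible shape for step 2 (the VEHICLES/SALES_FACTS column names and the sample input value are assumptions based on a typical star schema):

    -- column names (vehicle_key, make, model, quantity_sold) are assumptions
    CREATE OR REPLACE FUNCTION vehicles_by_vehicle_type (p_make_model IN VARCHAR2)
      RETURN NUMBER
    IS
      v_total NUMBER;
    BEGIN
      SELECT SUM(sf.quantity_sold)
        INTO v_total
        FROM sales_facts sf
        JOIN vehicles v ON v.vehicle_key = sf.vehicle_key
       WHERE v.make || v.model = p_make_model;
      RETURN v_total;
    END;
    /

    -- hypothetical sample call
    SELECT vehicles_by_vehicle_type('FordMustang') FROM dual;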

    Learn More
  9. DBM 405 Lab 7 Front End GUI

    DBM 405 Lab 7 Study Case Front-End GUI

    $20.00

    DBM 405 Lab 7 Study Case front-end GUI

    Scenario/Summary
    The More Movies company has hired you to redesign a database system for them that can facilitate the process of renting out and returning movies.
    They already have an Oracle database that stores information about movies, members who rent the movies, and the rentals. This is the database that you already have become familiar with and the one which includes tables: MM_MOVIE, MM_MOVIE_TYPE, MM_MEMBER, MM_RENTAL, and MM_PAY_TYPE. The machine on which this database is running has both the server and client Oracle9i software installed on it. Every night, a clerk updates data to account for the day's activities, and periodically the reports are run to summarize business, show renting trends, etc. Access to the database is accomplished using a SQL*Plus environment that is very similar to the iSQL*Plus that you know from the previous database course. This business process worked okay for as long as More Movies stayed a very small business.
    However, the company has grown substantially, expanding its operations to a larger movie selection and more members, and consequently it has moved to a larger location; it now occupies a two-story shop. It has become very impractical to record rentals only at the end of the day. They also do not want to rely on clerks knowing any SQL programming in order to record updates and run reports.
    In short, there is a need for a more convenient database system. The machine on which the database is currently running is powerful enough to host the database server. The database should be accessible from four checkout stations that process renting out and returning movies. This system should have an easy-to-use graphical user interface access.
    For the lab, you will be creating several documents to be submitted for the lab. Be sure that you save the documents with your last name and lab7 in the file name. Place all documents into a single ZIP file and submit for grading.

    LAB STEP
    Step 1:
    Describe what software you propose to use to develop the front-end GUI application for the new system. Be sure to justify your choice. Keep in mind portability, ease of use, scalability, and ability to update. What other options have you considered?

    Step 2:
    In setting up the servers and environment, do you propose to use middleware? If so, what kind, and where would you deploy it?

    Step 3:
    Provide a system diagram of the proposed system. Be sure to include such things as servers (application and database), user clients, and any other special pieces to the puzzle that you might think of.

    Step 4:
    Provide a detailed design of the GUI screen that facilitates renting out and returning movies. For every button or other component that reacts to user events, give detailed pseudocode. Also, clearly indicate where you would use any of the PL/SQL code that you developed for the labs in this course. If the application platform you have selected does not support PL/SQL, then describe how you would take the processing developed in the procedures and functions and incorporate it into the system.

    This concludes the Lab for Week 7.

    Learn More
  10. DBM 405 Lab 2 Simple PL SQL Applications

    DBM 405 Lab 2 Simple PL/SQL Applications Advanced Database Oracle

    $20.00

    DBM 405 Lab 2 Simple PL/SQL Applications Advanced Database Oracle

    Scenario/Summary
    The purpose of this week's lab is to work with basic PL/SQL syntax to create an anonymous block of code. In the lab, you will be using SQL*Plus to modify one of the tables in the MovieRental schema and then write a simple block of code to update the table with some new data and then execute the code in SQL*Plus. As an additional task in the lab, you will be asked to modify the existing PL/SQL block of code given to you to add exception handling and then execute it in SQL*Plus. Both of these concepts will help enforce the material covered in this second week.
    For the lab, you will need to create a script file containing the PL/SQL code that will address the lab steps below. Run the script file in your SQL*Plus session using the SET ECHO ON session command at the beginning to capture both the PL/SQL block code and output from Oracle after the block of code has executed. To successfully test the code in Step 3, you will need to copy/paste your code into SQL*Plus for each movie ID as you change the value for the host variable. Spool your output to a file named with your last name plus lab 2 and give the file a text (.txt) extension. For example, if your last name was Johnson then the file would be named johnson_lab2.txt. Submit both the spooled output AND the script file for grading of the lab.

    LAB STEP
    Step 1:
    As business is becoming strong and the movie stock is growing for More Movie Rentals, the manager wants to do more inventory evaluations. One item of interest concerns any movie for which the company is holding $75 or more in value. The manager wants to focus on these movies in regards to their revenue generation to ensure the stock level is warranted. To make these stock queries more efficient, the application team decides that a column should be added to the MM_MOVIE table named STK_FLAG that will hold a value '*' if stock is $75 or more. Otherwise, the value should be NULL. Add the new column to the MM_MOVIE table as a CHAR data type.
    Execute a DESC MM_MOVIE on the table both before you add the new column and after the column is added.
    Note: Since this code will be in your script file, you will need to comment it out after the first time you have executed the ALTER TABLE statement successfully, to avoid getting errors each additional time your script file is run.

    Step 2:
    Create an anonymous block of PL/SQL code that contains a CURSOR FOR loop to accomplish the task described above in Step 1. Your loop will need to interrogate the value (using an IF statement) found in the movie_qty field of the cursor loop variable to see if it is >= 75. If this is true, then you will need to update the new column in the table with an '*' using WHERE CURRENT OF. If the quantity is not >= 75 (the ELSE side of the IF statement), then update the new column with a NULL. A minimal sketch of this block follows below.
    Execute a SELECT * from MM_MOVIE both before and after you execute the new PL/SQL block of code to show that the process works.
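
    A minimal sketch of the Step 2 block (MOVIE_QTY and STK_FLAG come from the step text; the rest is assumed):

    -- sketch only: requires the STK_FLAG column added in Step 1
    DECLARE
      CURSOR movie_cur IS
        SELECT movie_qty FROM mm_movie FOR UPDATE;
    BEGIN
      FOR rec IN movie_cur LOOP
        IF rec.movie_qty >= 75 THEN
          UPDATE mm_movie SET stk_flag = '*' WHERE CURRENT OF movie_cur;
        ELSE
          UPDATE mm_movie SET stk_flag = NULL WHERE CURRENT OF movie_cur;
        END IF;
      END LOOP;
      COMMIT;
    END;
    /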

    Step 3:
    Here is a block that retrieves the movie title and rental count based on a movie ID provided via a host variable.
    SET SERVEROUTPUT ON
    VARIABLE g_movie_id NUMBER
    BEGIN
    :g_movie_id := 4;
    END;
    /
    DECLARE
    v_count NUMBER;
    v_title mm_movie.movie_title%TYPE;
    BEGIN
    SELECT m.movie_title, COUNT(r.rental_id)
    INTO v_title, v_count
    FROM mm_movie m, mm_rental r
    WHERE m.movie_id = r.movie_id
    AND m.movie_id = :g_movie_id
    GROUP BY m.movie_title;
    DBMS_OUTPUT.PUT_LINE(v_title || ': ' || v_count);
    END;
    /
    Modify the block of code to add exception handlers for errors that you can and cannot anticipate. You will need to execute the entire code listing shown above each time you wish to test it by changing the value of :g_movie_id for each test.
    Once finished, test your exception handling by running the modified block for the following values of :g_movie_id. Be sure that you can capture the value in the :g_movie_id host variable.
    •    12 - normal output will display title and number of rentals
    •    13 - exception - there is no movie ID for 13
    •    1 - exception - Movie with ID 1 has never been rented

    This concludes the Lab for Week 2.

    Learn More
