
Search results for 'DBM 502 Week 6'

Items 11 to 20 of 619 total

  1. DBM 380 Week 3 Art Museum Access Database

    DBM 380 Week 3 Entity Relationship Diagram

    $15.00

    Individual ERD Creation Project: The following assignment is based on the database environment chosen and discussed in the Week Two Individual Assignment. Use a Microsoft® Visio® diagram to create a detailed ERD using the data specifications noted in the Week Two Individual Assignment, making any necessary changes based on your faculty's feedback. Use a Microsoft® Access® database to create the preliminary database tables, columns with data types, primary keys, and relationships.
  2. DBM 380 Week 4 Art Museum Normalized Access Database

    DBM 380 Week 4 Normalization of ERD

    $15.00

    Individual Normalization of the ERD: The following assignment is based on the database environment chosen and created in the Week Three Individual Assignment. Your database project must meet the following assessment requirements: design and develop a database using professional principles and standards; provide a logical and physical design of the database; use a relational database software application to develop the database; provide an entity relationship diagram; normalize the database; and generate and provide test data. Use a Microsoft® Visio® diagram to normalize the ERD to third normal form (3NF). Use the Microsoft® Access® database created in Week Three to create a minimum of 10 rows of test data in each table. Also, create at least one query that joins two tables and returns values from both tables. Note: only the Microsoft® Visio® diagram must be normalized to 3NF; 3NF is not required for the Microsoft® Access® database. Submit the ERD and final database to the appropriate Assignment link.
  3. DBM 261 Week 2 Access database Riordan Manufacturing

    DBM 261 Week 2 Database Creation Riordan Manufacturing

    $20.00

    Create a Microsoft Access database using Riordan Manufacturing's 2005 sales figures. Normalize the tables to third normal form. Create relationships among the tables using the relationship tool; these relationships must include settings for referential integrity and cardinality. Create an extended ERD of your newly created database using Microsoft Word, PowerPoint, or Visio. Include primary keys, foreign keys, fields, relationships, and cardinality in the ERD.
  4. DBM 261 Week 3 Access Database Riordan Manufacturing

    DBM 261 Week 3 Query Building Exercise

    $20.00

    Build queries using both SQL and the query-building tool against the sales database built in Week Two for Riordan Manufacturing. Create a separate table for query information that includes the query name and query description. Enter data into the fields, naming and describing as many queries as you can, and then write two thirds of the queries using the query-building tool. Write the remaining queries in SQL: create a new query without the tools and enter the SQL directly into its SQL view. Test your queries to verify that they return the data you described.
  5. DBM449 LAB1 SqlFile

    DBM 449 LAB 1 Oracle Joins

    $20.00

    GENERAL OVERVIEW
    Scenario/Summary
    My colleague, Ann Henry, operates a regional training center for a commercial software organization. She created a database to track client progress so she can analyze the effectiveness of the certification program. CLIENT, COURSE, and COURSE_ACTIVITY are three of the tables in her database. The CLIENT table contains client name, company, client number, pre-test score, certification program and email address. The COURSE_ACTIVITY table contains client number, course code, grade, and instructor notes. The COURSE table contains the course code, course name, instructor, course date, and location. Although she and her instructors enter much of the data themselves, some of the data are extracted from the corporate database and loaded into her tables.

    Loading the initial data was easy. For grade entry at the end of each course, a former employee created a data entry form for the instructors. Updating most client information and generating statistics on client progress are not easy, however, because Ann does not know much SQL. For now, she exports the three tables into three spreadsheets. To look up a grade in the COURSE_ACTIVITY spreadsheet, she first has to look up the client number in the CLIENT spreadsheet. While this is doable, it is certainly not practical. For statistics, she sorts the data in the COURSE_ACTIVITY spreadsheet using multiple methods to get the numbers she needs.

    Every month, Ann's database tables need to be refreshed to reflect changes in the corporate database. Ann describes this unpleasant task. She manually compares the contents of newly extracted data from corporate to the data in her spreadsheets, copies in the new values, and then replaces the database contents with the new values.

    Ann needs our help. Let’s analyze her situation and determine what advanced SQL she could use to make her tasks easier.
         
    LAB OVERVIEW

    Scenario/Summary

    The purpose of this lab is to explore join operators to determine which, if any, are appropriate for solving Ann's business problems, as described in this week's lecture.

    Since Ann prefers to work from Excel spreadsheets, she wants her CLIENT and COURSE_ACTIVITY tables exported into one spreadsheet rather than two, as she is currently using. We need to determine which, if any, of the join operators will provide the data she wants for the single spreadsheet. (Note: we will not perform the export, just determine how to retrieve the necessary data.) Using the spreadsheet, she will be able to determine:

    1. Which course(s) a specific client has taken
    2. What grade(s) a specific client has earned in a specific course
    3. Which clients did not take any courses
    4. Which courses were not taken by any client

    Here are results from DESCRIBE commands that show the structure (columns and their data types) of the CLIENT and COURSE_ACTIVITY tables. You may refer to them while constructing your queries.

    SQL*Plus: Release 10.2.0.1.0 — Production on Thu Jun 14 22:38:52 2007

    Copyright (c) 1982, 2005, Oracle.  All rights reserved.

    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 — 64bit Production
    With the Partitioning, OLAP and Data Mining Options

    SQL> desc course_activity


    SQL>

    For this lab you will be creating several documents. First, write your queries in Notepad to create a script file that contains all of the queries asked for in lab steps 4 through 13. You can (and should) test each query as you write it to make sure that it works and returns the correct data. Once you have all of your queries written, create a SPOOL session and run your entire script file. Be sure to execute a SET ECHO ON session command before running the file so that both the query and the output are captured in the SPOOL file. IMPORTANT: If you are using Windows Vista you will need to create a directory on your C: drive to SPOOL your file into, because Vista will not allow you to write a file directly to the root of the C: drive. This gives you two files for the lab. The third file will be the Lab1 Report document found in Doc Sharing, in which you will put your responses to the questions asked in the various lab steps.
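    A minimal SQL*Plus sketch of the spool workflow just described; the directory and file names below are placeholders, not part of the lab materials.

    SET ECHO ON
    SPOOL C:\DBM449\lab1_results.txt
    REM Run the script file that contains your queries for steps 4 through 13.
    @C:\DBM449\lab1_queries.sql
    SPOOL OFF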

    Now let's begin.

    LAB STEPS

    STEP 1: Start Oracle SQL*Plus via Citrix

    Log into the Citrix iLab environment. Open your Oracle folder, select SQL Plus and log in to your database instance. Use "sys" as User Name, and "oracle" as the Password. Enter the Host String as "DB9999.world as sysdba" where 9999 is the database number you have been assigned.

    STEP 2: Initialize tables

    Download the pupbld.sql and Lab1_init.sql files associated with the links to your C: drive or to the F: drive in your Citrix environment. You will need to open each of the files and edit the connection string to reflect your instance name. The pupbld.sql file has two connection strings: one at the top of the script and another at the bottom. Be sure to change both of these to reflect your instance name.

    Once you have done this, run the pupbld.sql script first (DO NOT copy and paste it) in your SQL*Plus session. The script creates the product_user_profile synonym in the SYSTEM account, which is used each time you log in as a normal user.

    Next run the lab1_init.sql script in your session. The script will create a new user (DBM449_USER) that will be used in various labs in this course. You can find the password for this new user by looking at the CREATE USER statement in the script file. Disregard the DROP TABLE error messages. They occur because the script is designed to work regardless of whether you have already created the tables or not. This way, you may run it again if you ever decide to reset the contents of your tables to the original values. When you run the script for the first time, the error messages appear as you attempt to drop tables that do not exist.

    Once the script has finished you will be logged into the new user and ready to start your lab.

    STEP 3: Verify your tables

    You want to verify that everything completed successfully. To do this, execute a SELECT * FROM TAB statement to make sure all 5 tables were created, and then execute a SELECT COUNT(*) FROM statement against each of the table names (a sketch follows the list below). You should find the following numbers of records for each table.

    • CLIENT table - 5 rows
    • COURSE table - 5 rows
    • COURSE_ACTIVITY table - 6 rows
    • CORP_EXTRACT1 table - 3 rows
    • CORP_EXTRACT2 table - 0 rows
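    A minimal sketch of those verification statements; the table names come straight from the list above.

    SELECT * FROM tab;

    SELECT COUNT(*) FROM client;
    SELECT COUNT(*) FROM course;
    SELECT COUNT(*) FROM course_activity;
    SELECT COUNT(*) FROM corp_extract1;
    SELECT COUNT(*) FROM corp_extract2;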

    NOTE: In the following steps, when writing your queries be sure to list the tables in the FROM clause in the same order they are listed in the instructions. Reversing the order of the tables in the FROM clause will produce an incorrect result set.

    STEP 4: Using the FULL OUTER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a FULL OUTER JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the FULL OUTER JOIN be helpful to Ann? Place your response in the lab report document for this step.
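    A hedged sketch of the kind of statement this step asks for; the column names (CLIENT_NO, CLIENT_NAME, COURSE_CODE, GRADE) are assumptions that you should replace with the actual names from your DESCRIBE output. Steps 5 through 8 follow the same pattern with only the join operator swapped.

    -- Column names are assumed; confirm them with DESC CLIENT and DESC COURSE_ACTIVITY.
    SELECT c.client_no, c.client_name, a.course_code, a.grade
    FROM   client c
    FULL OUTER JOIN course_activity a
           ON c.client_no = a.client_no;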

    STEP 5: Using the RIGHT OUTER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a RIGHT OUTER JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the RIGHT OUTER JOIN be helpful to Ann? Place your response in the lab report document for this step.

    STEP 6: Using the LEFT OUTER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a LEFT OUTER JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the LEFT OUTER JOIN be helpful to Ann? Place your response in the lab report document for this step.

    STEP 7: Using the NATURAL JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using a NATURAL JOIN.

    • Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.
    • Will the NATURAL JOIN be helpful to Ann? Place your response in the lab report document for this step.

    STEP 8: Using the INNER JOIN operator

    Join the CLIENT and COURSE_ACTIVITY tables using an INNER JOIN.
    Write and execute the SQL statement that produces the client number and name, course code and grade that the client got in this course.

    Will the INNER JOIN be helpful to Ann? Place your response in the lab report document for this step.

    Based on the five steps above, write a conclusion stating which join (if any) Ann should use to populate the spreadsheet that can answer her questions.

    STEP 9: Using the UNION operator 

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the UNION operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.
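    A hedged sketch of the first of these statements; the column names are assumptions, and Steps 10 through 12 reuse the same shape with UNION ALL, INTERSECT, and MINUS in place of UNION.

    -- CLIENT_NO and the CORP_EXTRACT1 column name are assumed; adjust to your DESCRIBE output.
    SELECT client_no FROM client
    UNION
    SELECT client_no FROM corp_extract1;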

    STEP 10: Using the UNION ALL operator

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the UNION ALL operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.

    STEP 11: Using the INTERSECT operator

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the INTERSECT operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.

    STEP 12: Using the MINUS operator

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using the MINUS operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.

    STEP 13: Using subqueries

    Examine the clients and courses in Ann’s tables and the CORP_EXTRACT1 table using a subquery with NOT IN operator.

    • Write and execute the SQL statement that examines client numbers in CLIENT and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines client numbers in COURSE_ACTIVITY and CORP_EXTRACT1.
    • Write and execute the SQL statement that examines course names in COURSE and CORP_EXTRACT1.

    Which of these statements, if any, will be helpful to Ann? Place your response in the lab report document for this step.
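    A hedged sketch of a NOT IN subquery for the first bullet; again, the column names are assumptions.

    -- Returns client numbers that appear in CLIENT but not in CORP_EXTRACT1.
    SELECT client_no
    FROM   client
    WHERE  client_no NOT IN (SELECT client_no FROM corp_extract1);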

    Deliverables
        
    What is Due

    Submit your spooled lab file with the queries and result sets, along with the completed Lab 1 Report, to the Dropbox. Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.

  6. DBM 449 Lab 2 Sql File

    DBM 449 lab 2 OEM Query optimization

    $20.00

    In this lab we will focus on several common performance tuning issues that one might encounter while working with a database.  You will need to refer to both your text book and the lecture material for this week for examples and direction to complete this lab.
    To record your work for this lab use the LAB2_Report.doc found in Doc Sharing. As in your first lab you will need to copy/paste your SQL statements and results from SQL*Plus into this document. This will be the main document you submit to the Drop Box for Week 2.

    LAB STEPS
    STEP 1: Examine Query Optimization using OEM

    Oracle Enterprise Manager (OEM) provides a graphical tool for query optimization.  The tables that you will be using in this lab are the same ones that were created in the first lab in the DBM449_USER schema.

    1. Start OEM via Citrix iLab. If you need help or instructions on how to do this you can refer to the How_to_use_OEM_in_Citrix iLab.pdf file associated with this link.
    2. Select the Database Tools icon from the vertical tool bar and select the SQL Scratchpad icon from the expanded tool bar. If you need help or instructions on how to do this, you can refer to the Executing_and_Analyzing_Queries_in_OEM.pdf file associated with this link.
    3. Write a SQL statement to query all data from table COURSE (you will need to connect as the DBM449_USER). Click on Execute. Take a screen shot that shows the results and paste that into the lab document.
    4. Click on Explain Plan. Take a screen shot of the results and paste that into the lab document.
    5. Write a comment on how this query is executed.
    6. Write a SQL statement to query the course_name, client_name and grade from the COURSE, COURSE_ACTIVITY and CLIENT tables and order the results by course name, and within the same course by client name.
    7. Click on Explain Plan. Take a screen shot of the results and paste that into the lab document.
    8. Exit out of OEM at this point.
    9. Write a comment on how this query is executed.

    STEP 2: Examine Query Optimization using SQL*Plus

    In this portion of the lab we are going to use SQL*Plus to replicate what we did in Step one using OEM.  At the end of this part of the lab you will be asked to compare the results between the processes.

    1. Before you can analyze an SQL statement in SQL*Plus you first need to create a Plan Table that will hold the results of your analysis.  To do this you will need to download the UTLXPLAN.SQL file associated with this link and run this script in an SQL*Plus session while logged in as the DBM449_USER user.  Once the script has completed then execute a DESC command on the PLAN_TABLE.
    2. Again you are going to write a SQL statement to query all data from table COURSE.  Remember to make the modifications to the query so that it will utilize the plan table that you just created.
    3. Now write the query that will produce a result table similar to the one below by using the DBMS_XPLAN package (see the sketch after this list).

    PLAN_TABLE_OUTPUT
    --------------------------------------------------------------------------
    Plan hash value: 1263998123

    --------------------------------------------------------------------------
    | Id | Operation         | Name   | Rows | Bytes | Cost (%CPU) | Time     |
    --------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |        |    5 |   345 |     3   (0) | 00:00:01 |
    |  1 | TABLE ACCESS FULL | COURSE |    5 |   345 |     3   (0) | 00:00:01 |
    --------------------------------------------------------------------------

    Note
    -----
    - dynamic sampling used for this statement

    4. Now execute the second query you used in Step 1 and then show the results in the plan table for that query.  HINT: Before you run your second query you will need to delete the contents of the plan table so that you get a clean analysis.
    5. Write a short paragraph comparing the output from OEM to the output from the EXPLAIN PLAN process you just ran.  Be sure to copy/paste all of the queries and result sets from this step into the lab report section for this step.
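    A minimal sketch of the EXPLAIN PLAN workflow used in this step, assuming the default PLAN_TABLE created by UTLXPLAN.SQL.

    EXPLAIN PLAN FOR
      SELECT * FROM course;

    -- Display the plan that was just generated.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- Clear the plan table before analyzing the next statement.
    DELETE FROM plan_table;
    COMMIT;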

    STEP 3: Dealing With Chained Rows

    In this portion of the lab we are going to create a new table and then manipulate some data to generate a series of chained rows within the table.  After you have generated this problem, we will go through the process of correcting it and tuning the table so that the chained rows are gone.  The process is a little tricky and will require you to think through your approach to some of the SQL.  Remember that every table has a pseudocolumn named ROWID that is provided implicitly by the system; this column can be queried just like any other column.  You will need this information in step 6 of this part of the lab.

    1. For this part of the lab you will need to create a new user named GEORGE.  You can determine your own password, but you want to make sure that the default tablespace is USERS and the temporary tablespace is TEMP.  Grant both the CONNECT and RESOURCE roles to the new user and then log in to create a session for the new user GEORGE.
    2. Once logged in as the new user, write the SQL to create a new table using the column information and storage parameters listed below (a hedged sketch follows the parameter list).  NOTE: the parameters have been chosen intentionally, so please do not change them.

    Table name: NEWTAB
    Columns:
         Prod_id        NUMBER
         Prod_desc      VARCHAR2(30)
         List_price     NUMBER(10,2)
         Date_last_upd  DATE

    Tablespace:    USERS
         PCTFREE                     10
         PCTUSED                     90
         Initial and Next extents    1K
         MinExtents                  1
         MaxExtents                  121
         PCTINCREASE                 0
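    A hedged sketch of a CREATE TABLE statement that maps the parameters above to Oracle syntax; verify it against your own tablespace settings before relying on it.

    CREATE TABLE newtab (
      prod_id        NUMBER,
      prod_desc      VARCHAR2(30),
      list_price     NUMBER(10,2),
      date_last_upd  DATE
    )
    TABLESPACE users
    PCTFREE 10
    PCTUSED 90
    STORAGE (INITIAL 1K NEXT 1K MINEXTENTS 1 MAXEXTENTS 121 PCTINCREASE 0);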

    3. Next, you will need to download both the UTLCHAIN.SQL and LAB2_FILL_NEWTAB.SQL scripts from the links shown.  First run the UTLCHAIN script in your SQL*Plus session and then run the LAB2_FILL_NEWTAB script.  Be sure that you run them in the order just described.
    4. Now execute the ANALYZE command on the table NEWTAB to gather any chained rows.  HINT: refer back to the lecture material for this week and your textbook.
    5. Write and execute the query that will list the owner_name, table_name and head_rowid columns from the CHAINED_ROWS table.  You will have approximately 200+ rows in your result set so please do not copy/paste all of them into the lab report.  You only need the first 10 or 15 rows as a representation of what was returned.
    6. Now you need to get rid of all of the chained rows using the following steps.
      • You can create your temporary table to hold the chained rows of the NEWTAB table as a select statement based on the existing table.  HINT: CREATE TABLE NEWTAB_TEMP AS SELECT * FROM NEWTAB....  You want to be sure that you only pull data from the existing table that matches the data in the CHAINED_ROWS table.  To assure this you will need a WHERE clause that pulls only those records of the NEWTAB table whose ROWID value matches a HEAD_ROWID value in the CHAINED_ROWS table (see the sketch after this list).
      • Now you need to delete the chained rows from the NEWTAB table.  To accomplish this you will need a subquery that pulls the HEAD_ROWID value from the CHAINED_ROWS table to match against the ROWID value in the NEWTAB table.  The number of rows deleted should be the same as the number that you retrieved in the query for part 5 of this section.
      • Now write an insert statement that will insert all of the rows of data in the temporary table that you created above into the NEWTAB table.  Be sure that you explicitly define the rows that you are pulling data from in the NEWTAB_TEMP table.
      • Next, write and execute the statement that will TRUNCATE the chained_rows table.
      • Now run the same ANALYZE statement you did in step 4 and then the query you did in part 5 above.  This time you should get a return message stating no rows selected.
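    A hedged sketch of the repair sequence described in the bullets above; it assumes the CHAINED_ROWS table created by UTLCHAIN.SQL with its TABLE_NAME and HEAD_ROWID columns.

    CREATE TABLE newtab_temp AS
      SELECT * FROM newtab
      WHERE  rowid IN (SELECT head_rowid FROM chained_rows
                       WHERE  table_name = 'NEWTAB');

    DELETE FROM newtab
    WHERE  rowid IN (SELECT head_rowid FROM chained_rows
                     WHERE  table_name = 'NEWTAB');

    INSERT INTO newtab (prod_id, prod_desc, list_price, date_last_upd)
      SELECT prod_id, prod_desc, list_price, date_last_upd
      FROM   newtab_temp;

    COMMIT;

    TRUNCATE TABLE chained_rows;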

    Be sure that you copy/paste all of the above SQL code and returned results sets and messages into the appropriate place in the Lab Report for this week.

    Deliverables     

    What is Due
    Submit your completed Lab 2 Report to the Dropbox as stated below.  Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.
     

  7. DBM 449 Lab 3 Sql File

    DBM 449 Lab 3 Distributed Database

    $20.00

    LAB OVERVIEW

    Scenario/Summary
    To the end user, working with databases distributed throughout a company's network is no different from working with multiple tables within a single database. The fact that the different databases exist in other locations should be totally transparent to the user. For this lab we are going to take on the role of a database administrator in a company that has three regional offices in the country. You work in the central regional office, but there is also a West Coast Region located in Seattle and an East Coast Region located in Miami. Your role is to gather report information from the other two regions.

    For this lab you are going to work with three different databases. You already have your own database instance. You will also be working with a database named SEATTLE representing the West Coast Region and a database named MIAMI representing the East Coast Region. Login information for these two additional database instances is as follows:

    SEATTLE: Userid - seattle_user
    Password - seattle
    Host String - seattle

    MIAMI: Userid - miami_user
    Password - miami
    Host String - miami

    To record your work for this lab use the LAB3_Report.doc found in Doc Sharing. As in your previous labs you will need to copy/paste your SQL statements and results from SQL*Plus into this document. This will be the main document you submit to the Drop Box for Week 3.

    LAB STEPS
    STEP 1: Setting up Your Environment

    1. Be sure you are connected to the DBM449_USER schema that was created in lab 1. 
    2. To begin this lab you will need to download the LAB3_DEPTS.SQL script file associated with the link and run the script in the DBM449_USER schema of your database instance. This script creates a single table that you will be using to help pull data from each of the other two database instances.  Notice that the DEPTNO column in this table is the PRIMARY KEY column and can be used to reference or link to the DEPTNO column in the employee tables of the other two databases.
    3. Now you need to create a couple of private database links that will allow you to connect to your other two regional databases. To accomplish this use the connection information listed above in the Lab Overview section. Name your links using your database instance name together with the region name as the name for the link. Separate the two with an underscore (example - DB1000_SEATTLE).
    4. After creating both of your database links, query the USER_DB_LINKS view in the data dictionary to retrieve information about your database links.  The output from your query should look similar to what you see below.  You will need to set your linesize to 132 and format the DB_LINK and HOST columns to be only 25 bytes wide to get the same format that you see.

    DB_LINK                   USERNAME                       HOST                      CREATED
    ------------------------- ------------------------------ ------------------------- ---------
    DB1000_MIAMI              MIAMI_USER                     miami                     09-DEC-08
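    A hedged sketch of one of the links and of the dictionary query; DB9999 stands in for your own instance name, and the connection details come from the Lab Overview above.

    CREATE DATABASE LINK db9999_seattle
      CONNECT TO seattle_user IDENTIFIED BY seattle
      USING 'seattle';

    SELECT db_link, username, host, created
    FROM   user_db_links;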

    STEP 2: Testing your Database Links
    Each of your remote databases has an employee data table. The tables are named SEATTLE_EMP and MIAMI_EMP, corresponding to the database they are in. Using the appropriate database link, query each of the two tables to retrieve the employee number, name, job function, and salary. (HINT: you can issue a DESC command on each of the distributed tables to find out the actual column names, just like you would for a table in your own instance.)
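    A hedged sketch of one of these queries; the EMP-style column names (EMPNO, ENAME, JOB, SAL) are assumptions to confirm with DESC, and the link name follows the pattern from Step 1.

    -- Column names are assumed; verify them with DESC SEATTLE_EMP@DB9999_SEATTLE.
    SELECT empno, ename, job, sal
    FROM   seattle_emp@db9999_seattle;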

    STEP 3: Connecting Data in the Seattle Database
    Write a query that will retrieve all employees from the Seattle region who are salespeople working in the marketing department. Show the employee number, name, job function, salary, and department name (HINT: The department name is in the DEPT table) in the result set.

    STEP 4: Connecting Data in the Miami Database
    Write a query that will retrieve all employees from the Miami region who work in the accounting department. Show the employee number, name, job function, salary, and department name (HINT: The department name is in the DEPT table) in the results set.

    STEP 5: Connecting Data in all Three Databases
    Now we need to expand our report. Write a query that will retrieve employees from both the Seattle and Miami regions who work in sales. Show the employee number, employee name, job function, salary and location name in the result set (HINT: The location name is in the DEPT table).

    STEP 6: Improving Data Retrieval from all Three Databases
    Writing queries like the ones above can be fairly cumbersome. It would be much better to be able to pull this type of data as though it was coming from a single table, and in fact this can be done by creating a view.

    1. Using the query written above as a guide, write and execute the SQL statement that will create a view showing all employees in both the Seattle and Miami regions (you can use your own naming convention for the view name). Show the employee number, name, job, salary, commission, department number and location name for each employee (HINT: The location name is in the DEPT table).
    2. Now write a query that will retrieve all the data from the view just created.
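    A hedged sketch of one way such a view might be written, assuming EMP-style column names, a LOC column in the local DEPT table, and the link names from Step 1.

    CREATE VIEW all_region_emp AS
      SELECT e.empno, e.ename, e.job, e.sal, e.comm, e.deptno, d.loc
      FROM   seattle_emp@db9999_seattle e
             JOIN dept d ON e.deptno = d.deptno
      UNION ALL
      SELECT e.empno, e.ename, e.job, e.sal, e.comm, e.deptno, d.loc
      FROM   miami_emp@db9999_miami e
             JOIN dept d ON e.deptno = d.deptno;

    SELECT * FROM all_region_emp;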

    Deliverables
    Submit your completed Lab 3 Report to the Dropbox. Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.

  8. DBM 449 Lab 4 Oracle Object Types

    DBM 449 Lab 4 Oracle Object type

    $20.00

    LAB OVERVIEW

    Scenario/Summary
    For this lab you will begin by using the same set of tables that you used for Lab 1 so be sure that you are connected to Oracle as the DBM449_USER user. The objective of this lab will be to create a series of object-relational tables using the SQL*Plus editor that will allow data to be stored in a more "real-world" format. Data for your new tables can be found in the file Lab4_data.txt associated with this link. You will need to manipulate the data in various ways, but the file will give you access to the raw data to use.
    To record your work for this lab use the LAB4_Report.doc found in Doc Sharing. As in your previous labs you will need to copy/paste your SQL statements and results from SQL*Plus into this document. This will be the main document you submit to the Drop Box for Week 4.

    LAB STEPS
    STEP 1: Create a table with a column data type

    Modify the design of the COURSE table created in iLab 1 to incorporate the use of the column abstract data type.

    1. Write and execute the SQL to create a single object type called COURSE_OBJ1 that contains both the attributes course code and course name. Remember that with abstract objects you must use the / after the CREATE statement to execute it.
    2. Next, write and execute the SQL to create a table called NEW_COURSE1 that contains COURSE_OBJ1 along with the original attributes from the original COURSE table. Keep in mind what attributes the new object type COURSE_OBJ1 contains. Your table should have a total of 4 individual columns when finished.
    3. Using the data from the LAB4_DATA file, create and execute the insert statements to load the new table NEW_COURSE1. SUGGESTION: Using the Lab4_data file, create a script file of your insert statements and then run the script file. Remember that you will need to enclose some of the data in single quotes depending on whether it is character, date, or numeric data.
    4. Run the DESCRIBE command to describe the structure of the NEW_COURSE1 table.
    5. SET DESCRIBE DEPTH 2 and run DESCRIBE NEW_COURSE1 again.
    6. Execute a SELECT statement to query the data from the new table (DO NOT use a SELECT * type query). Use the COLUMN column_name FORMAT A## session command to format columns within the table to keep the result set data from wrapping around. Be sure that you properly display data inside the object column. (HINT: When querying attributes of an abstract data type, you must use a correlation variable for the table.)
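    A hedged sketch of items 1, 2 and 6 above; every attribute name and data type here is an assumption to adapt to your own Lab 1 COURSE table.

    CREATE TYPE course_obj1 AS OBJECT (
      course_code  VARCHAR2(10),
      course_name  VARCHAR2(40)
    );
    /

    CREATE TABLE new_course1 (
      course       course_obj1,
      course_date  DATE,
      instructor   VARCHAR2(30),
      location     VARCHAR2(30)
    );

    -- A correlation variable (n) is required to reach the object's attributes.
    SELECT n.course.course_code, n.course.course_name, n.course_date
    FROM   new_course1 n;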

    STEP 2: Create an object table with a row data type
    Create a second COURSE table, this time as an object table using the row abstract data type.

    1. Write and execute the SQL to create an object called COURSE_OBJ2 that contains the attributes course code, course name, course date, instructor, and location.
    2. Write and execute the SQL to create a table called NEW_COURSE2 with a single column defined using the COURSE_OBJ2 object.
    3. Using the data from the LAB4_DATA file, create and execute the insert statements to load the new table NEW_COURSE2.
    4. Execute a SELECT statement to query the data from the new table (DO NOT use a SELECT * type query).

    STEP 3: Create a Varying Array
    Modify the design of the CLIENT table created in iLab 1 to incorporate the use of the Varying Array.

    1. Write and execute the SQL to create a Varying Array to represent the phone contact information for the client (up to 3 phone numbers). Name the varying array as PHONE_LIST.
    2. Write and execute the SQL to create a table called NEW_CLIENT that contains the attributes that the original CLIENT table contained plus the phone list array.
    3. Using the data from the LAB4_DATA file, create and execute the insert statements to load the new table NEW_CLIENT.
    4. Execute a SELECT statement to query the data from the CLIENT_NO and CLIENT_NAME columns along with the data in the column containing the phone number Varray (You cannot use a SELECT * type query for this step).
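    A hedged sketch of the varying array and table; the sizes and most of the column names are assumptions based on the CLIENT description from Lab 1.

    CREATE TYPE phone_list AS VARRAY(3) OF VARCHAR2(15);
    /

    CREATE TABLE new_client (
      client_no      NUMBER,
      client_name    VARCHAR2(40),
      company        VARCHAR2(40),
      pretest_score  NUMBER,
      cert_program   VARCHAR2(30),
      email          VARCHAR2(50),
      phones         phone_list
    );

    SELECT client_no, client_name, phones
    FROM   new_client;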

    Deliverables
    Submit your completed Lab 4 Report to the Dropbox. Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.

     

  9. DBM 449 Lab 5 SQL Audit and Profile Management

    DBM 449 Lab 5 Audit and Profile Management

    $20.00

    In your lab for this week you are going to work with three different areas and processes within the Oracle Database that can be used to control data security. Each of these three processes has its own distinctive application in providing levels of security. In each case the individual processes deal with either limiting a user's access to the database, limiting access to processes within the database, or keeping track of what the user is doing while in the database.

    For the lab you will be using the SCOTT user, which is already created in your instance. In Step 4 you will also be asked to shut down your instance, make some edits to the init.ora file for your instance, and then restart the instance. If you are not comfortable with this process, which was first introduced to you in DBM438, then refer to the iLab Manual found in week 1 for guidance.

    To record your work for this lab use the LAB5_Report.doc found in Doc Sharing. As in your previous labs you will need to copy/paste your SQL statements and results from SQL*Plus into this document. This will be the main document you submit to the Dropbox for Week 5.

    LAB STEPS

    STEP 1: Define a New Profile

    Oracle provides the ability to set expirations, limit the reuse, and define the complexity of passwords. In addition, accounts can be locked if the password is entered incorrectly too many times. In this section of the lab we are going to create a custom profile that will then be applied to the SCOTT user.

    1. To begin, log into your instance as the SYS user.
    2. Write a SQL script that will create a new profile named DBM449_SCOTT_PROFILE that will do the following (see the sketch after this list):
      • Limit the number of failed login attempts to 3 in a row.
      • Limit the overall connection time to 10 hours (we will give him a little leeway in case he has to work overtime).
      • Allow a session to be idle no more than 1 hour.
      • Change the password every 60 days.
      • Allow the user 3 days to change the password after it expires.
      • Not allow a previous password to be reused before there have been three password changes.
    3. Execute your profile script and verify that the profile has been created by running a query against the DBA_PROFILES view in the data dictionary. Limit your output to ONLY the DBM449_SCOTT_PROFILE parameters.
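    A hedged sketch mapping the requirements above to Oracle profile parameters (CONNECT_TIME and IDLE_TIME are expressed in minutes, the password limits in days).

    CREATE PROFILE dbm449_scott_profile LIMIT
      FAILED_LOGIN_ATTEMPTS  3
      CONNECT_TIME           600  -- 10 hours, in minutes
      IDLE_TIME              60   -- 1 hour, in minutes
      PASSWORD_LIFE_TIME     60   -- days
      PASSWORD_GRACE_TIME    3    -- days
      PASSWORD_REUSE_MAX     3;

    SELECT profile, resource_name, limit
    FROM   dba_profiles
    WHERE  profile = 'DBM449_SCOTT_PROFILE';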

    Be sure to copy/paste your script and results sets output to the appropriate section in the Lab5_report document.

    STEP 2: Testing the New Profile

    Now that we have a new profile for the SCOTT user we need to verify that it works properly. For obvious reasons there are going to be parts of the profile that we cannot test within the confines of this lab due to time constraints, but we can test to verify that the SCOTT user is being controlled by the profile.

    1. The first thing we need to do is assign the profile to the SCOTT user. While still logged into your instance as the SYS user, write and execute the SQL command that will assign the new DBM449_SCOTT_PROFILE profile to the SCOTT user.
    2. Now log into SCOTT (password is TIGER). Remember that you must supply the database instance name when logging in from the SQL> prompt just as you do when using the login window, i.e. CONN SCOTT/TIGER@DB####.WORLD.
    3. There are several things that we can test related to the logging in and changing a password so here we go.
      • You should now be successfully connected to the SCOTT user. Write the connect command again, but this time use an incorrect password. NOTE: you should get a warning message stating that you are no longer connected to Oracle. That is fine; just keep trying to log in.
      • Repeat the above process until you get the ORA-28000: the account is locked error which will indicate that the profile is working here.
      • At this point we need to get the account unlocked so you will need to login to your instance as the SYS user and unlock the SCOTT account BUT DO NOT LOG BACK INTO THE SCOTT USER YET.
      • Now we can test the password reuse parameter. To do this we must EXPIRE the current password. Write and execute the SQL command to expire the password for the SCOTT user.
      • Now log back into the SCOTT user. You should receive a message stating that the password has expired (ORA-28001: the password has expired) and then prompting you to change the password.
      • Try to reuse the TIGER password. You should receive the following - ORA-28007: the password cannot be reused.
    4. Now log into the SCOTT user again and this time change the password to LION to complete this step of the lab.

    Be sure to copy/paste your script and results sets output to the appropriate section in the Lab5_report document.

    STEP 3: Using the PRODUCT_USER_PROFILE table

    As the owner of a schema, a user has certain inherited privileges that allow the user to pass access to his/her own objects on to other users. Oftentimes this can open up data to scrutiny by individuals who probably do not need access to it. These types of decisions should always be made by the DBA in charge of the database. One mechanism the DBA has for keeping users from using these inherited privileges is excluding those commands using the PRODUCT_USER_PROFILE (PUP) table. In this section of the lab we are going to do this to the SCOTT user by setting up a scenario that prohibits him from giving the user GEORGE (created in lab 2) access to the EMP table.

    1. For this section and the remainder of the lab you must have the PRODUCT_USER_PROFILE successfully loaded and accessible in your instance. The creation of this profile was one of the first things done back in Lab 1 when you ran the PUPBLD.SQL script. If you are getting an error message stating "Error accessing PRODUCT_USER_PROFILE" when you log in as the DBM449_USER or the SCOTT user, then this profile is not successfully installed. Work with your instructor to figure out why your script from Lab 1 did not work correctly. Until this is resolved you will not be able to complete the remainder of the lab.
    2. If you have the PRODUCT_USER_PROFILE successfully working then log in to your database instance as the SYS user.
    3. Now we need to limit SCOTT from being able to use the GRANT command.
      • Insert the proper values into the PRODUCT_USER_PROFILE table that will keep the SCOTT user from using the GRANT command (see the sketch after this list). Remember that some of the values in your insert statement must be in upper case and some will need to be in mixed case. Once you have done this, query the table to verify the insert (REMEMBER: you cannot query the table as the SYS user, only as the SYSTEM user).
      • Now we need to test our above settings and make sure they are working.
      • Connect to the SCOTT user (remember that you changed the password to LION).
      • Write and execute the statement that would GRANT the user GEORGE the ability to write a select statement and see the data in the EMP table owned by SCOTT. You should receive the following message - SP2-0544: Command "grant" disabled in Product User Profile.
    4. This verifies that you have now disabled the ability of the SCOTT user to allow another user to access any of the data in his schema.
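    A hedged sketch of the insert and verification; the PRODUCT_USER_PROFILE columns used here (PRODUCT, USERID, ATTRIBUTE, CHAR_VALUE) are the standard ones, but confirm them with a DESC before relying on this.

    -- 'SQL*Plus' must stay in mixed case; the user and command values are upper case.
    INSERT INTO system.product_user_profile (product, userid, attribute, char_value)
    VALUES ('SQL*Plus', 'SCOTT', 'GRANT', 'DISABLED');
    COMMIT;

    SELECT product, userid, attribute, char_value
    FROM   system.product_user_profile
    WHERE  userid = 'SCOTT';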

    Be sure to copy/paste your script and results sets output to the appropriate section in the Lab5_report document.

    STEP 4: Setting up the Database to use Auditing

    Being able to audit what, when and where people are doing things in the database can be very enlightening for a DBA. It can also be a very important tool in working with data security. Oracle provides the ability to do various types of auditing, but it takes some special setup of the environment for this to work. In this step we are going to make the necessary adjustments to the current Oracle instance so that we can enable auditing and run some tests. If you need to review the processes to be used here, refer to the iLab Manual in week 1.

    1. First you need to make sure that you are logged into your instance as the SYS user.
    2. At this point issue a SHUTDOWN IMMEDIATE command to shut down your database instance.
    3. Once the instance is shut down you need to go into your Citrix Windows Explorer application, find your database instance set of directory folders, drill down to the pfile directory folder and open your init.ora file found in that folder.
    4. Under the section titled "Security and Auditing" you need to add the parameter AUDIT_TRAIL and set the parameter to DB_EXTENDED. This will allow the SQL_TEXT column of the DBA_AUDIT_OBJECT view to be populated. Save and close the file and then go back to your SQL*Plus session.
    5. Now using the init.ora file, start your instance back up to an OPEN status. You can do this by issuing a STARTUP PFILE= statement and pointing to your init.ora file.
    6. Once you have completed this process you are ready to begin setting up the database to audit some activity.

    Be sure to copy/paste your script and results sets output to the appropriate section in the Lab5_report document.

    STEP 5: Creating an Audit Trail

    Oracle permits audit trails to be generated for session login attempts, access to objects, and activity performed on objects. Again using the SCOTT user we are going to set up several scenarios for auditing what SCOTT does while in a session. NOTE: if you need to work through this process several times you can delete the values in the AUD$ base table by issuing the TRUNCATE TABLE AUD$ command while logged in as the SYS user.

    1. Make sure that you are connected as user SYS.
    2. Display the value of the parameter AUDIT_TRAIL. For the VALUE column you should have a value of DB_EXTENDED.
    3. Now we can set up auditing to track what goes on in the database.
      • Write SQL statements to audit successful and unsuccessful login attempts by SCOTT.
      • Write a SQL statement to audit any successful INSERT, UPDATE or DELETE performed on the DEPT table in SCOTT's schema.
    4. Now we need to test the audits to verify that they work.
      • Log into the SCOTT user (remember that the password is LION) and perform the following:
      • Write and execute an UPDATE statement that will change the value in the LOC column of the DEPT table to MIAMI where the DEPTNO value is 10.  Be sure to issue a COMMIT.
      • Write and execute the INSERT statement that will insert the following values into DEPT - (50, 'LEGAL', 'HOUSTON').  Be sure to issue a COMMIT.
      • Write and execute the DELETE statement that will delete the row from the DEPT table that was just inserted in the step above.  Again, be sure to issue a COMMIT.
      • Try to reconnect to the SCOTT user with an invalid password.
      • Now connect back to the SYS user.

    Now we need to see if our auditing worked.

    1. While logged into your instance as the SYS user, query the DBA_AUDIT_OBJECT view of the data dictionary for the user name of the account (not the OS user), the object owner, the object name, the action name, and the SQL command text.
    2. Did you notice that the entries for successful and unsuccessful logon attempts were not there? Now query the user name, action name and return code values in the DBA_AUDIT_SESSION view. You should find that information here.
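    A hedged sketch of the audit settings and dictionary queries described in this step.

    -- Audit successful and unsuccessful login attempts by SCOTT.
    AUDIT SESSION BY scott;

    -- Audit successful DML on SCOTT's DEPT table.
    AUDIT INSERT, UPDATE, DELETE ON scott.dept BY ACCESS WHENEVER SUCCESSFUL;

    -- After the test activity, review the trail.
    SELECT username, owner, obj_name, action_name, sql_text
    FROM   dba_audit_object;

    SELECT username, action_name, returncode
    FROM   dba_audit_session;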

    Be sure to copy/paste your script and results sets output to the appropriate section in the Lab5_report document.

    Deliverables

    Submit your completed Lab 5 Report to the Dropbox. Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.

  10. DBM 449 Lab 6 SQL Analytical Extensions and Materialized Views

    DBM 449 Lab 6 SQL Analytical Extensions and Materialized Views

    $20.00

    For the lab this week we are going to look at how the ROLLUP and CUBE extensions available in SQL can be used to create query result sets that have more than one dimension to them. Both of these extensions are used in conjunction with the GROUP BY clause and allow for a much broader look at the data.

    The first thing you will do for this lab is download the lab6_create.sql file and run the file in your database instance. This file will log into the DBM449_USER and then create and populate a set of tables that will be used for this lab.  Instructions for this are outlined in Step 1.

    To record your work for this lab use the LAB6_Report.doc found in Doc Sharing. As in your previous labs you will need to copy/paste your SQL statements and results from SQL*Plus into this document. This will be the main document you submit to the Dropbox for Week 6.

    LAB STEPS

    STEP 1: Setting up Your Instance

    For this lab you will be using a different user and set of tables than you have used so far for other labs. To set up your instance you will need to do the following.

    1. Download the lab6_create.sql file associated with the link to either the C drive on your computer or the F drive in your Citrix account.
    2. Open up the file and edit the login information at the top for the new user that is being created. You will need to replace the @ORACLE piece with the specifics for your instance name. DO NOT include AS SYSDBA after the name of your instance for this login.
    3. Now log into your instance as the SYS user. Run the script. The script is too long to copy/paste it into your SQL*Plus session so you should run the script using the @ sign from the SQL> prompt.
    4. Once the script has finished running, issue a SELECT * FROM TAB; SQL statement. The result set will include tables from other labs as well, but you want to make sure that you see the following tables listed.

    TNAME                          TABTYPE CLUSTERID
    ------------------------------ ------- ----------
    SUPPLIER                       TABLE
    PRODUCT                        TABLE
    DISTRICT                       TABLE
    CUSTOMER                       TABLE
    TIME                           TABLE
    SALES                          TABLE

     

    STEP 2: Using the ROLLUP Extension 

    In this section of the lab you are going to create a sales report that will show a supplier code, a product code and the total sales for each product based on unit price times quantity. More importantly, the column that shows the total sales will also show a grand total for each supplier as well as an overall grand total (this will be the last row of data shown). To do this you will use the ROLLUP extension as part of the GROUP BY clause in the query. Use aliases for the column names so that the output columns in the result set look like the following.

    SUPPLIER CODE PRODUCT    TOTAL SALES
    ------------- ---------- -----------

    For this report you are going to use the SALES, PRODUCT and SUPPLIER tables. You should be able to write your query using NATURAL JOIN but if you feel more comfortable using a traditional JOIN method that will work just as well. When finished you should have a total of 16 rows in your report and the grand total amount should show 2810.74.
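    A hedged sketch of the shape of this query; the column names (SUPPLIER_CODE, PRODUCT_CODE, UNIT_PRICE, QUANTITY) are assumptions about the lab6 tables, and Step 3 follows the same pattern with CUBE over the SALES, PRODUCT and TIME tables.

    SELECT   supplier_code               AS "SUPPLIER CODE",
             product_code                AS "PRODUCT",
             SUM(unit_price * quantity)  AS "TOTAL SALES"
    FROM     sales
             NATURAL JOIN product
             NATURAL JOIN supplier
    GROUP BY ROLLUP (supplier_code, product_code);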

    Be sure to copy your SQL code and the result set produced and paste it into the appropriate place in the LAB6_REPORT.

    STEP 3: Using the CUBE Extension

    In this section of the lab you are going to create a sales report that will show a month code, a product code and the total sales for each product based on unit price times quantity. In this report the column that shows the total sales will also show a subtotal for each month (in this case representing a quarter). Following the monthly totals for each product and the subtotal by month, the report will then list a total for each product sold during the period, with a grand total for all sales during the period (this will be the last row of data shown). To do this you will use the CUBE extension as part of the GROUP BY clause in the query. Use aliases for the column names so that the output columns in the result set look like the following.

         MONTH PRODUCT    TOTAL SALES
    ---------- ---------- -----------

    For this report you are going to use the SALES, PRODUCT and TIME tables. You should be able to write your query using NATURAL JOIN but if you feel more comfortable using a traditional JOIN method that will work just as well. When finished you should have a grand total amount of 2810.74 (same total as in the step 2).

    Be sure to copy your SQL code and the result set produced and paste it into the appropriate place in the LAB6_REPORT.

    STEP 4: Materialized Views and View Logs

    Materialized views, sometimes referred to as snapshots, are a very important aspect of dealing with data when doing data mining or working with a data warehouse. Unlike regular views, a materialized view does not always automatically react to changes made in the base tables of the view. To help keep track of changes made to the base tables, you must create what is called a Materialized View Log on each base table that will be used in the view. In this step of the lab we will do this.

    For the Materialized View we are going to create, we are going to use the TIME and SALES tables. Before we can create the view, you will need to create a Materialized View Log on each of these two tables that keeps track of the ROWID and sequence and includes new values that have been added to the base table.
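    A hedged sketch of the two logs; depending on how the lab6 tables are defined you may also need to list filter columns in the WITH clause.

    CREATE MATERIALIZED VIEW LOG ON time
      WITH ROWID, SEQUENCE INCLUDING NEW VALUES;

    CREATE MATERIALIZED VIEW LOG ON sales
      WITH ROWID, SEQUENCE INCLUDING NEW VALUES;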

    Be sure to copy your SQL code and the result set produced and paste it into the appropriate place in the LAB6_REPORT.

    STEP 5: Creating and Using the Materialized View

    Now that we have our logs created we can progress on to the view itself. For this part of the lab you are going to create a Materialized View, demonstrate that the view works, insert a row of data into one of the base tables and then update the view. Finally, you will show that the new data is in the view. The following steps will help move you through this process.

    1. First, write the SQL CREATE statement that will create a Materialized View based on the following:
      • Name the view SALESBYMONTH.
      • Include clauses that will build the view immediately, completely refresh the view, and enable a query rewrite.
      • For the columns of the view you want to show the YEAR, MONTH, PRODUCT CODE, a TOTAL SALES UNITS, and a TOTAL SALES.
      • You will want to group the columns by year, month and product code respectively.
    2. Execute your script to create the view and then issue a SELECT * FROM SALESBYMONTH.
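    A hedged sketch of the CREATE statement; the TIME and SALES column names and the join column are assumptions to adapt to your own tables.

    CREATE MATERIALIZED VIEW salesbymonth
      BUILD IMMEDIATE
      REFRESH COMPLETE
      ENABLE QUERY REWRITE
    AS
      -- The join column (time_code) is an assumption; check DESC SALES and DESC TIME.
      SELECT   t.year, t.month, s.product_code,
               SUM(s.quantity)                 AS units_sold,
               SUM(s.quantity * s.unit_price)  AS sales_total
      FROM     sales s
               JOIN time t ON s.time_code = t.time_code
      GROUP BY t.year, t.month, s.product_code;

    SELECT * FROM salesbymonth;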

    The output columns from your view should look similar to the following (use aliases to format the column headings) and you should have 18 rows in the result set.


                                      YEAR      MONTH PRODUCT CO UNITS SOLD SALES TOTAL
                                  -------- ---------- ---------- ---------- -----------

    Now we are going to add some data and update the view. Because we have several derived columns in our view, we will have to force the update, as Oracle will not automatically update a view with this configuration.

    1. To begin with, insert the following data into the SALES table - (207, 110016, 'SM-18277',1,8.95).
    2. Now we are going to use a subprogram within the Oracle built-in package DBMS_MVIEW. The REFRESH subprogram within this package will update our view so that we can see the new data.
    3. Write an SQL EXECUTE statement that will use the REFRESH procedure in the DBMS_MVIEW package (HINT: packagename.subprogram). The REFRESH subprogram accepts two parameters: the name of the materialized view to refresh, and either a 'c', 'f', or '?'. For the purposes of the lab use the 'c' (you can refer back to pages 654-659 of the DBA Handbook readings for week 3).
    4. Execute your statement to update the view and then query the view once again.
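    A minimal sketch of that refresh call ('c' forces a complete refresh).

    EXECUTE DBMS_MVIEW.REFRESH('SALESBYMONTH', 'c');

    SELECT * FROM salesbymonth;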

    You should now see that the row for units sold in month 10 for SM-18277 has increased from 3 to 4 and total sales amount has gone from 26.85 to 35.80.

    Be sure to copy your SQL code and the result set produced and paste it into the appropriate place in the LAB6_REPORT.

    Deliverables

    Submit your completed Lab 6 Report to the Dropbox. Your report should contain copies of each query and result set outlined in the lab along with the requested explanation of whether or not it satisfied the business requirement outlined for that particular section of the lab.

