Tuesday, April 8, 2014

Week 12
Constant time access and review
We studied constant-time access and reviewed all of the material we learned in this course.
Python dictionaries can insert and find items in constant time. Python lists are accessed by index, and the references to the elements of a list are stored in consecutive memory locations.
A string or a tuple can be converted into an integer using a hash function: Python has a built-in function called hash that converts an immutable object into a 32- or 64-bit integer.
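
As a quick illustration (my own toy example in the Python shell, not code from the lecture), the built-in hash function and constant-time dictionary operations look like this:

d = {}
d['cat'] = 3              # insert: 'cat' is hashed to choose a slot in memory
d[(1, 2)] = 'a tuple'     # tuples are immutable, so they can also be keys

print(hash('cat'))        # hash() converts an immutable object into an integer
print(hash((1, 2)))
print(d['cat'])           # lookup: hash 'cat' again and jump straight to its slot
# hash([1, 2]) would raise a TypeError, because lists are mutable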

After I learned about constant-time access, I reviewed the lectures from week 1 to week 11. I found that I still did not understand the structure of LinkedList, so I will review it carefully to prepare for the final exam.
 
 

Monday, March 31, 2014

Week 11
Sorting and Efficiency 
I learned about sorting and efficiency in class. We had to measure algorithm performance, which means that we learned how to measure running time. As the input gets bigger, a program takes more time to run. We focused on the size of the input, which is called n.
There are several common growth rates for running time:
constant: c, logarithmic: c log n, linear: cn, cubic: cn^3, exponential: c2^n, ...

There are many sorting algorithms, and they have different running times.

quick sort: O(n log n) on average, O(n^2) in the worst case
Quick sort chooses a pivot element from the list. It moves everything smaller than the pivot into one list (the left sublist) and everything larger than the pivot into another list (the right sublist). Then it quicksorts the left and right sublists, and the sorted list is made of the sorted left sublist, the pivot, and the sorted right sublist. Comparing each item in the list with the pivot takes n steps, and this happens at every level of the recursion: about log n levels on average, but n levels in the worst case.
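
Here is a minimal Python sketch of that idea, choosing the first element as the pivot (my own reconstruction; the lecture may have picked the pivot differently):

def quick_sort(lst):
    """Return a new sorted list containing the items of lst."""
    if len(lst) < 2:                               # 0 or 1 items: already sorted
        return lst[:]
    pivot = lst[0]
    left = [x for x in lst[1:] if x < pivot]       # everything smaller than the pivot
    right = [x for x in lst[1:] if x >= pivot]     # everything larger (or equal)
    return quick_sort(left) + [pivot] + quick_sort(right)

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))        # [1, 1, 2, 3, 4, 5, 6, 9]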

merge sort: O(n log n)
Merge sort divides the list in half and mergesorts each half. Then it merges the two sorted halves in linear time. Merge sort divides the list about log n times, and at each level the merging compares the items in the list about n times.
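
A minimal sketch of merge sort, again my own reconstruction rather than the code from class:

def merge_sort(lst):
    """Return a new sorted list containing the items of lst."""
    if len(lst) < 2:
        return lst[:]
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])        # sort each half recursively (log n levels)
    right = merge_sort(lst[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]      # one of these tails is empty

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]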
 


 

Saturday, March 22, 2014

Week 10
Sorting and big-Oh
This week, I learned some techniques for writing the A2 code during the lecture, as well as some sorting methods and their running times. I thought I had completed A2, but I found a huge mistake in regex_match: I was processing the regular expression of Regex (e.g. '(1|0)') to check whether it matches a string, when I had to process a RegexTree instead of the regular expression. I rewrote all of the code in regex_match.

Linear search takes more time to find an item because we must examine every element in the list. There are some methods that make it possible to find a value faster than linear search.

Quick sort: First, pick one item (the pivot) from the list and compare it with each item in the list. If an item in the list is less than the pivot, put it on the left side of the pivot; if an item is greater than the pivot, put it on the right side. Then apply quick sort to the items on the left side of the pivot and to the items on the right side of the pivot.

Merge sort: First, divide the list into two lists and mergesort each half. Then compare the last items of the two sorted lists, append the bigger one to a new (initially empty) list, and pop it from its original list. Continue this until both lists become empty. Finally, reverse the new list, which now contains every item.
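
Here is a sketch of just that merging step, written the way I described it (comparing the last items, popping, and reversing at the end); this is my own reconstruction and assumes both input lists are already sorted:

def merge_from_back(left, right):
    """Merge two sorted lists into one sorted list."""
    merged = []
    while left and right:
        # the bigger of the two last items is the biggest remaining item
        if left[-1] > right[-1]:
            merged.append(left.pop())
        else:
            merged.append(right.pop())
    leftover = left or right            # one list is empty; drain the other
    while leftover:
        merged.append(leftover.pop())
    merged.reverse()                    # merged was built biggest-first
    return merged

print(merge_from_back([1, 4, 7], [2, 3, 9]))   # [1, 2, 3, 4, 7, 9]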
 
Week 9
BSTs and big-Oh
This week I learned about binary search trees and big-Oh. I also finished is_regex, all_regex_permutations, and build_regex for A2 this week. However, regex_match was harder than the other cases, because I had to consider special cases for StarTree and a Leaf of epsilon. A StarTree and a Leaf of epsilon match the empty string, so I had to write more if statements.

I think the definition of a binary tree is that each node of the tree has 0, 1, or 2 children, and in a binary search tree the data in the left subtree is less than the data at the node, which is less than the data in the right subtree. This structure makes it easier to find a value in the tree and to delete a value from the tree. For example, if the data to delete is less than the data at a node, we delete it from the left subtree and return the node; if the data to delete is greater than the data at the node, we delete it from the right subtree and return the node.

Searching for a value in a binary search tree is more efficient than searching a linear list. If I search a linear list, I must investigate every element in the list. However, if I search a binary search tree, I am allowed to ignore half of the tree at each step. If the data I want is less than the data at the node, I can ignore the right subtree and search only the left subtree.
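
A minimal sketch of that search, assuming a simple node class (BTNode is just my own name for this sketch, not necessarily the class used in the course):

class BTNode:
    """A node of a binary search tree."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def bst_search(node, value):
    """Return True iff value is in the BST rooted at node."""
    if node is None:
        return False                          # empty tree: value is not here
    if value == node.data:
        return True
    if value < node.data:
        return bst_search(node.left, value)   # ignore the whole right subtree
    return bst_search(node.right, value)      # ignore the whole left subtree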

Running time means the time it takes to run the code. We can find the running time of each piece of code and choose the one that solves the problem faster than the others. If an algorithm runs in a number of steps no more than cg(n) for some constant c, we say the algorithm is O(g(n)). This means that the running time of the algorithm grows no faster than cg(n) as the size of the input grows.


 

Monday, March 10, 2014

Week 8
Linked List

I learned about linked lists this week. I did a difficult exercise during the lab and could not finish the tasks. It is difficult to describe the relationships between the values of a linked list. This week's lectures helped me understand linked lists more deeply.

There are two ways to understand a linked list.
1. A linked list is made up of a value and the remaining list. This means that the list has a first value and a sublist, and the sublist itself has a further sublist, and so on.
2. An object has a value and a reference to another similar object. This means that one node refers to the next node, which in turn refers to the node after it (see the sketch below).
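
A tiny sketch of the second view, with a node class of my own (LLNode is not necessarily the name used in the course):

class LLNode:
    """One node of a linked list: a value plus a reference to the next node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

lst = LLNode(1, LLNode(2, LLNode(3)))   # the list 1 -> 2 -> 3
print(lst.value)                        # 1, the first value
print(lst.nxt.value)                    # 2, the rest of the list starts here
print(lst.nxt.nxt.value)                # 3
print(lst.nxt.nxt.nxt)                  # None marks the end of the list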



Monday, March 3, 2014

Week 7:
Recursion and Linked Structures

Recursion is defined as "defining something in terms of itself". A recursive function calls itself in its function body, so we don't have to write duplicate code. For example, if we want to find the number of objects in a nested list, we must check each nested list, and a recursive function is a natural fit. If an element of the list is a plain object, the function counts it as one object; if it is a smaller list inside the list, the function calls itself and checks the inside of the smaller list. Every recursive function has a base case, which checks whether the function has to call itself in the function body, so it automatically solves the problem and we do not have to check a condition for each subproblem separately.
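
A short sketch of that nested-list example (count_objects is my own hypothetical function, not one from the course):

def count_objects(lst):
    """Return the number of non-list objects in a possibly nested list."""
    count = 0
    for item in lst:
        if isinstance(item, list):
            count += count_objects(item)   # a smaller list: count what is inside it
        else:
            count += 1                     # a plain object counts as one
    return count

print(count_objects([1, [2, 3], [4, [5, 6]]]))   # 6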

In my opinion, recursion is very useful for programmers because it removes duplicate code and makes function bodies shorter. However, recursive functions are sometimes difficult to read, and it can be hard to understand what they do; tracing a recursive function precisely is very difficult. In conclusion, if we want to write code efficiently, we should use recursion.

Saturday, February 22, 2014

Week 6
Recursive Structures.
This week, I learned some terminology and the structure of trees.
A tree is made up of nodes. One node is distinguished as the root, the ancestor of every other node, and each non-root node has exactly one parent.
A node with no children is called a leaf, and a node with one or more children is called an internal node.

There are three ways to visit the root and the subtrees (see the sketch after this list):
Pre-order traversal - visit the root, then pre-order the left subtree, then pre-order the right subtree.
In-order traversal - visit in-order the left subtree, then the root, then in-order the right subtree.
Post-order traversal - visit post-order the left subtree, then post-order the right subtree, then the root.
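
Here is a small sketch of the three traversals, assuming a node object with data, left, and right attributes (my own toy version, not the professor's Tree examples):

def preorder(node):
    """Return the data in pre-order: root, then left subtree, then right subtree."""
    if node is None:
        return []
    return [node.data] + preorder(node.left) + preorder(node.right)

def inorder(node):
    """Return the data in in-order: left subtree, then root, then right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.data] + inorder(node.right)

def postorder(node):
    """Return the data in post-order: left subtree, then right subtree, then root."""
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.data]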

After the lecture, I played with the Tree examples that the professor posted on the website, and I understood the structure of the three traversal methods above. I also found the best method to move the cheeses from the first stool to the fourth stool with the minimum number of moves, and completed Assignment 1.